Updates from: 11/23/2022 02:11:02
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory On Premises Ecma Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-ecma-troubleshoot.md
After you configure the provisioning agent and ECMA host, it's time to test conn
![Screenshot that shows that the ECMA service is running.](./media/on-premises-ecma-troubleshoot/tshoot-1.png)
- 2. Go to the folder where the ECMA host was installed by selecting **Troubleshooting** > **Scripts** > **TestECMA2HostConnection**. Run the script. This script sends a SCIM GET or POST request to validate that the ECMA Connector Host is operating and responding to requests. It should be run on the same computer as the ECMA Connector Host service itself.
+ 2. Check that the ECMA Connector Host service is responding to requests.
+ 1. On the server with the agent installed, launch PowerShell.
+ 1. Change to the folder where the ECMA host was installed, such as `C:\Program Files\Microsoft ECMA2Host`.
+ 1. Change to the subdirectory `Troubleshooting`.
+ 1. Run the script `TestECMA2HostConnection.ps1` in that directory. When prompted, provide the connector name and the secret token.
+ ```
+ PS C:\Program Files\Microsoft ECMA2Host\Troubleshooting> .\TestECMA2HostConnection.ps1
+ Supply values for the following parameters:
+ ConnectorName: CORPDB1
+ SecretToken: ************
+ ```
+ 1. This script sends a SCIM GET or POST request to validate that the ECMA Connector Host is operating and responding to requests. If the output does not show that an HTTP connection was successful, then check that the service is running and that the correct secret token was provided.
3. Ensure that the agent is active by going to your application in the Azure portal, selecting **admin connectivity**, selecting the agent dropdown list, and ensuring your agent is active.
4. Check if the secret token provided is the same as the secret token on-premises. Go to on-premises, provide the secret token again, and then copy it into the Azure portal.
5. Ensure that you've assigned one or more agents to the application in the Azure portal.
6. After you assign an agent, you need to wait 10 to 20 minutes for the registration to complete. The connectivity test won't work until the registration completes.
- 7. Ensure that you're using a valid certificate. Go to the **Settings** tab of the ECMA host to generate a new certificate.
+ 7. Ensure that you're using a valid certificate that has not expired. Go to the **Settings** tab of the ECMA host to view the certificate expiration date. If the certificate has expired, click `Generate certificate` to generate a new certificate.
8. Restart the provisioning agent. In the taskbar on your VM, search for the Microsoft Azure AD Connect provisioning agent, right-click it, select **Stop**, and then select **Start**. A PowerShell alternative for checking and restarting the services is sketched after this list.
- 1. If you continue to see `The ECMA host is currently importing data from the target application` even after restarting the ECMA Connector Host and the provisioning agent, and waiting for the initial import to complete, then you may need to cancel and re-start configuring provisioning to the application in the Azure portal.
+ 1. If you continue to see `The ECMA host is currently importing data from the target application` even after restarting the ECMA Connector Host and the provisioning agent, and waiting for the initial import to complete, then you may need to cancel and start over configuring provisioning to the application in the Azure portal.
1. When you provide the tenant URL in the Azure portal, ensure that it uses the following pattern. You can replace `localhost` with your host name, but it isn't required. Replace `connectorName` with the name of the connector you specified in the ECMA host. The error message 'invalid resource' generally indicates that the URL does not follow the expected format. ```
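If you prefer to check and restart these services from PowerShell instead of the taskbar, the following is a minimal sketch. The service display names are assumptions based on the names used in this article; adjust them if they differ on your server.

```powershell
# List the ECMA Connector Host and provisioning agent services and their status.
Get-Service -DisplayName 'Microsoft ECMA2Host*', 'Microsoft Azure AD Connect Provisioning Agent*' |
    Format-Table DisplayName, Status

# Restart the provisioning agent service (run PowerShell as administrator).
Get-Service -DisplayName 'Microsoft Azure AD Connect Provisioning Agent*' | Restart-Service
```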
After you configure the provisioning agent and ECMA host, it's time to test conn
## Unable to configure the ECMA host, view logs in Event Viewer, or start the ECMA host service
-To resolve the following issues, run the ECMA host as an admin:
+To resolve the following issues, run the ECMA host configuration wizard as an administrator:
* I get an error when I open the ECMA host wizard. ![Screenshot that shows an ECMA wizard error.](./media/on-premises-ecma-troubleshoot/tshoot-2.png)
-* I can configure the ECMA host wizard, but I can't see the ECMA host logs. In this case, you need to open the host as an admin and set up a connector end to end. This step can be simplified by exporting an existing connector and importing it again.
+* I can configure the ECMA host wizard, but I can't see the ECMA host logs. In this case, you need to open the ECMA Host configuration wizard as an administrator and set up a connector end to end. This step can be simplified by exporting an existing connector and importing it again.
![Screenshot that shows host logs.](./media/on-premises-ecma-troubleshoot/tshoot-3.png)
To resolve the following issues, run the ECMA host as an admin:
## Turn on verbose logging
-By default, `switchValue` for the ECMA Connector Host is set to `Verbose`. This will emit detailed logging that will help you troubleshoot issues. You can change the verbosity to `Error` if you would like to limit the number of logs emitted to only errors. Wen using the SQL connector without Windows Integrated Auth, we recommend setting the `switchValue` to `Error` as it will ensure that the connection string is not emitted in the logs. In order to change the verbosity to error, please update the `switchValue` to "Error" in both places as shown below.
+By default, `switchValue` for the ECMA Connector Host is set to `Verbose`. This setting emits detailed logging that helps you troubleshoot issues. You can change the verbosity to `Error` if you would like to limit the logs emitted to errors only. When using the SQL connector without Windows Integrated Auth, we recommend setting the `switchValue` to `Error` because it ensures that the connection string is not emitted in the logs. To change the verbosity, update the `switchValue` to "Error" in both places as shown below.
The file location for verbose service logging is C:\Program Files\Microsoft ECMA2Host\Service\Microsoft.ECMA2Host.Service.exe.config. ```
The file location for wizard logging is C:\Program Files\Microsoft ECMA2Host\Wiz
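If you prefer to script the verbosity change instead of editing the file by hand, the following PowerShell is a minimal sketch. It assumes the default installation path shown above and updates every `switchValue` attribute it finds, which covers both places mentioned earlier. Back up the file first and restart the ECMA2Host service afterwards.

```powershell
# Reduce ECMA2Host service logging from Verbose to Error.
$configPath = 'C:\Program Files\Microsoft ECMA2Host\Service\Microsoft.ECMA2Host.Service.exe.config'
Copy-Item -Path $configPath -Destination "$configPath.bak"

[xml]$config = Get-Content -Path $configPath
foreach ($node in $config.SelectNodes('//*[@switchValue]')) {
    # Change each switchValue attribute so that only errors are emitted.
    $node.SetAttribute('switchValue', 'Error')
}
$config.Save($configPath)
```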
## Query the ECMA Host Cache

The ECMA Host has a cache of users in your application that is updated according to the schedule you specify in the properties page of the ECMA Host wizard. In order to query the cache, perform the steps below:

1. Set the Debug flag to `true`.
-2. Restart the ECMA Host service.
-3. Query this endpoint from the server the ECMA Host is installed on, replacing `{connector name}` with the name of your connector, specified in the properties page of the ECMA Host. `https://localhost:8585/ecma2host_{connectorName}/scim/cache`
-Please be aware that setting the debug flag to `true` disables authentication on the ECMA Host. You will want to set it back to `false` and restart the ECMA Host service once you are done querying the cache.
+ Please be aware that setting the debug flag to `true` disables authentication on the ECMA Host. You will need to set it back to `false` and restart the ECMA Host service once you are done querying the cache.
-The file location for verbose service logging is C:\Program Files\Microsoft ECMA2Host\Service\Microsoft.ECMA2Host.Service.exe.config.
- ```
- <?xml version="1.0" encoding="utf-8"?>
- <configuration>
+ The file location for verbose service logging is `C:\Program Files\Microsoft ECMA2Host\Service\Microsoft.ECMA2Host.Service.exe.config`.
+ ```
+ <?xml version="1.0" encoding="utf-8"?>
+ <configuration>
<startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6" /> </startup>
The file location for verbose service logging is C:\Program Files\Microsoft ECMA
<add key="Debug" value="true" /> </appSettings>
- ```
+ ```
+
+2. Restart the `Microsoft ECMA2Host` service.
+1. Wait for the ECMA Host to connect to the target systems and re-read its cache from each of the connected systems. If there are many users in those connected systems, this import process could take several minutes.
+1. Query this endpoint from the server the ECMA Host is installed on, replacing `{connectorName}` with the name of your connector, as specified in the properties page of the ECMA Host: `https://localhost:8585/ecma2host_{connectorName}/scim/cache`. A direct query sketch follows this procedure.
+
+ 1. On the server with the agent installed, launch PowerShell.
+ 1. Change to the folder where the ECMA host was installed, such as `C:\Program Files\Microsoft ECMA2Host`.
+ 1. Change to the subdirectory `Troubleshooting`.
+ 1. Run the script `TestECMA2HostConnection.ps1` in that directory, and provide as arguments the connector name and the `ObjectTypePath` value `cache`. When prompted, type the secret token configured for that connector.
+ ```
+ PS C:\Program Files\Microsoft ECMA2Host\Troubleshooting> .\TestECMA2HostConnection.ps1 -ConnectorName CORPDB1 -ObjectTypePath cache
+ Supply values for the following parameters:
+ SecretToken: ************
+ ```
+ 1. This script sends a SCIM GET request to validate that the ECMA Connector Host is operating and responding to requests. If the output does not show that an HTTP connection was successful, then check that the service is running and that the correct secret token was provided.
+
+1. Set the Debug flag back to `false` or remove the setting once you are done querying the cache.
+2. Restart the `Microsoft ECMA2Host` service.
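As an alternative to the helper script used above, you can query the cache endpoint directly while the Debug flag is still set to `true` (so authentication is disabled). The following is a minimal sketch; it assumes PowerShell 7 or later (for `-SkipCertificateCheck`, because the ECMA host uses a self-signed certificate) and the example connector name `CORPDB1`.

```powershell
# Query the ECMA Connector Host cache for the CORPDB1 connector and print it as JSON.
$uri = 'https://localhost:8585/ecma2host_CORPDB1/scim/cache'
$cache = Invoke-RestMethod -Uri $uri -SkipCertificateCheck
$cache | ConvertTo-Json -Depth 5
```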
## Target attribute is missing

The provisioning service automatically discovers attributes in your target application. If you see that a target attribute is missing in the target attribute list in the Azure portal, perform the following troubleshooting steps:
- 1. Review the **Select Attributes** page of your ECMA host configuration to check that the attribute has been selected to be exposed to the Azure portal.
- 1. Ensure that the ECMA host service is turned on.
+ 1. Review the **Select Attributes** page of your ECMA host configuration to check that the attribute has been selected, so that it will be exposed to the Azure portal.
+ 1. Ensure that the ECMA host service is running.
1. Review the ECMA host logs to check that a /schemas request was made, and review the attributes in the response. This information will be valuable for support to troubleshoot the issue.

## Collect logs from Event Viewer as a zip file
-Go to the folder where the ECMA host was installed by selecting **Troubleshooting** > **Scripts**. Run the `CollectTroubleshootingInfo` script as an admin. You can use it to capture the logs in a zip file and export them.
+You can use an included script to capture the event logs in a zip file and export them.
+
+ 1. On the server with the agent installed, right-click PowerShell in the Start menu and select `Run as administrator`.
+ 1. Change to the folder where the ECMA host was installed, such as `C:\Program Files\Microsoft ECMA2Host`.
+ 1. Change to the subdirectory `Troubleshooting`.
+ 1. Run the script `CollectTroubleshootingInfo.ps1` in that directory.
+ 1. The script will create a ZIP file in that directory containing the event logs.
## Review events in Event Viewer
After the ECMA Connector Host schema mapping has been configured, start the serv
| Error | Resolution |
| -- | -- |
| Could not load file or assembly 'file:///C:\Program Files\Microsoft ECMA2Host\Service\ECMA\Cache\8b514472-c18a-4641-9a44-732c296534e8\Microsoft.IAM.Connector.GenericSql.dll' or one of its dependencies. Access is denied. | Ensure that the network service account has 'full control' permissions over the cache folder. A sketch for granting this permission follows the table. |
-| Invalid LDAP style of object's DN. DN: username@domain.com" or `Target Site: ValidByLdapStyle` | Ensure the 'DN is Anchor' checkbox is not checked in the 'connectivity' page of the ECMA host. Ensure the 'autogenerated' checkbox is selected in the 'object types' page of the ECMA host. See [About anchor attributes and distinguished names](on-premises-application-provisioning-architecture.md#about-anchor-attributes-and-distinguished-names) for more information.|
+| Invalid LDAP style of object's DN. DN: username@domain.com" or `Target Site: ValidByLdapStyle` | Ensure the 'DN is Anchor' checkbox is not checked in the 'connectivity' page of the ECMA host. Ensure the 'autogenerated' checkbox is selected in the 'object types' page of the ECMA host. For more information, see [About anchor attributes and distinguished names](on-premises-application-provisioning-architecture.md#about-anchor-attributes-and-distinguished-names).|
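For the cache folder permission mentioned in the first row, the following is a minimal sketch of granting NETWORK SERVICE full control. The path is taken from the error message above; adjust it if your installation differs.

```powershell
# Grant the NETWORK SERVICE account full control over the ECMA2Host cache folder and its children.
icacls 'C:\Program Files\Microsoft ECMA2Host\Service\ECMA\Cache' /grant 'NETWORK SERVICE:(OI)(CI)F' /T
```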
## Understand incoming SCIM requests

Requests made by Azure AD to the provisioning agent and connector host use the SCIM protocol. Requests made from the host to apps use the protocol the app supports. The requests from the host to the agent to Azure AD rely on SCIM. You can learn more about the SCIM implementation in [Tutorial: Develop and plan provisioning for a SCIM endpoint in Azure Active Directory](use-scim-to-provision-users-and-groups.md).
-At the beginning of each provisioning cycle, before performing on-demand provisioning and when doing the test connection, the Azure AD provisioning service generally makes a get-user call for a [dummy user](use-scim-to-provision-users-and-groups.md#request-3) to ensure the target endpoint is available and returning SCIM-compliant responses.
+The Azure AD provisioning service generally makes a get-user call to check for a [dummy user](use-scim-to-provision-users-and-groups.md#request-3) in three situations: at the beginning of each provisioning cycle, before performing on-demand provisioning, and when **test connection** is selected. This check ensures the target endpoint is available and returning SCIM-compliant responses to the Azure AD provisioning service.
## How do I troubleshoot the provisioning agent?
By using Azure AD, you can monitor the provisioning service in the cloud and col
### I am getting an Invalid LDAP style DN error when trying to configure the ECMA Connector Host with SQL

By default, the generic SQL connector expects the DN to be populated using the LDAP style (when the 'DN is anchor' attribute is left unchecked in the first connectivity page). In the error message `Invalid LDAP style DN` or `Target Site: ValidByLdapStyle`, you may see that the DN field contains a user principal name (UPN), rather than an LDAP style DN that the connector expects.
-To resolve this, ensure that **Autogenerated** is selected on the object types page when you configure the connector.
+To resolve this error message, ensure that **Autogenerated** is selected on the object types page when you configure the connector.
-See [About anchor attributes and distinguished names](on-premises-application-provisioning-architecture.md#about-anchor-attributes-and-distinguished-names) for more information.
+For more information, see [About anchor attributes and distinguished names](on-premises-application-provisioning-architecture.md#about-anchor-attributes-and-distinguished-names).
## Next steps
active-directory On Premises Ldap Connector Prepare Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-ldap-connector-prepare-directory.md
+
+ Title: Preparing for Azure AD Provisioning to Active Directory Lightweight Directory Services (preview)
+description: This document describes how to configure Azure AD to provision users into Active Directory Lightweight Directory Services as an example of an LDAP directory.
+ Last updated : 11/15/2022
+# Prepare Active Directory Lightweight Directory Services for provisioning from Azure AD
+
+This tutorial shows how to prepare an Active Directory Lightweight Directory Services (AD LDS) installation that you can use as an example LDAP directory for troubleshooting or for demonstrating [how to provision users from Azure AD into an LDAP directory](on-premises-ldap-connector-configure.md).
+
+## Prepare the LDAP directory
+
+If you do not already have a directory server, the following information is provided to help create a test AD LDS environment. This setup uses PowerShell and the ADAMInstall.exe with an answers file. This document does not cover in-depth information on AD LDS. For more information, see [Active Directory Lightweight Directory Services](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/hh831593(v=ws.11)).
+
+If you already have AD LDS or another directory server, you can skip this content, and continue at the [Tutorial: ECMA Connector Host generic LDAP connector](on-premises-ldap-connector-configure.md) for installing and configuring the ECMA connector host.
+
+### Create an SSL certificate and a test directory, and install AD LDS
+Use the PowerShell script from [Appendix A](#appendix-ainstall-ad-lds-powershell-script). The script performs the following actions:
+ 1. Creates a self-signed certificate that will be used by the LDAP connector.
+ 2. Creates a directory for the feature install log.
+ 3. Exports the certificate in the personal store to the directory.
+ 4. Imports the certificate to the trusted root of the local machine.
+ 5. Installs the AD LDS role on our virtual machine.
+
+On the Windows Server virtual machine that you are using to test the LDAP connector, edit the script to match your computer name, and then run the script using Windows PowerShell with administrative privileges.
+
+### Create an instance of AD LDS
+Now that the role has been installed, you need to create an instance of AD LDS. To create an instance, you can use the answer file provided below. This file will install the instance quietly without using the UI.
+
+Copy the contents of [Appendix B](#appendix-banswer-file) into Notepad and save the file as **answer.txt** in **"C:\Windows\ADAM"**.
+
+Now open a command prompt with administrative privileges and run the following executable:
+
+```
+C:\Windows\ADAM> ADAMInstall.exe /answer:answer.txt
+```
+
+### Create containers and a service account for AD LDS
+Then use the PowerShell script from [Appendix C](#appendix-cpopulate-ad-lds-powershell-script). The script performs the following actions:
+ 1. Creates a container for the service account that will be used with the LDAP connector.
+ 1. Creates a container for the cloud users, where users will be provisioned to.
+ 1. Creates the service account in AD LDS.
+ 1. Enables the service account.
+ 1. Adds the service account to the AD LDS Administrators role.
+
+On the Windows Server virtual machine that you are using to test the LDAP connector, run the script using Windows PowerShell with administrative privileges.
+
+### Grant the NETWORK SERVICE read permissions to the SSL certificate
+To enable SSL to work, you need to grant the NETWORK SERVICE account read permissions to the newly created certificate. To grant permissions, use the following steps. A PowerShell alternative is sketched after the list.
+
+ 1. Navigate to **C:\ProgramData\Microsoft\Crypto\Keys**.
+ 2. Right-click the system file located there. Its name is a GUID. This container stores the certificate.
+ 1. Select properties.
+ 1. At the top, select the **Security** tab.
+ 1. Select **Edit**.
+ 1. Click **Add**.
+ 1. In the box, enter **Network Service** and select **Check Names**.
+ 1. Select **NETWORK SERVICE** from the list and click **OK**.
+ 1. Click **OK**.
+ 1. Ensure that the NETWORK SERVICE account has Read and Read & execute permissions, and then click **Apply** and **OK**.
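The following PowerShell is a sketch of the same permission change. It assumes the certificate subject `CN=APP3` created by the Appendix A script and that the key was created with the default software key storage provider; verify the resulting key file path before applying it.

```powershell
# Locate the private key file for the self-signed certificate and grant NETWORK SERVICE read access.
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -eq 'CN=APP3' } | Select-Object -First 1
$rsaKey = [System.Security.Cryptography.X509Certificates.RSACertificateExtensions]::GetRSAPrivateKey($cert)
$keyFile = Join-Path 'C:\ProgramData\Microsoft\Crypto\Keys' $rsaKey.Key.UniqueName

$acl = Get-Acl -Path $keyFile
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule('NETWORK SERVICE', 'Read, ReadAndExecute', 'Allow')
$acl.AddAccessRule($rule)
Set-Acl -Path $keyFile -AclObject $acl
```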
+
+### Verify SSL connectivity with AD LDS
+Now that you have configured the certificate and granted the NETWORK SERVICE account permissions, test the connectivity to verify that it is working. A command-line check is sketched after these steps.
+ 1. Open Server Manager and select AD LDS on the left
+ 2. Right-click your instance of AD LDS and select ldp.exe from the pop-up.
+ [![Screenshot that shows the Ldp tool location.](../../../includes/media/active-directory-app-provisioning-ldap/ldp-1.png)](../../../includes/media/active-directory-app-provisioning-ldap/ldp-1.png#lightbox)</br>
+ 3. At the top of ldp.exe, select **Connection** and **Connect**.
+ 4. Enter the following information and click **OK**.
+ - Server: APP3
+ - Port: 636
+ - Place a check in the SSL box
+ [![Screenshot that shows the Ldp tool connection configuration.](../../../includes/media/active-directory-app-provisioning-ldap/ldp-2.png)](../../../includes/media/active-directory-app-provisioning-ldap/ldp-2.png#lightbox)</br>
+ 5. You should see a response similar to the screenshot below.
+ [![Screenshot that shows the Ldp tool connection configuration success.](../../../includes/media/active-directory-app-provisioning-ldap/ldp-3.png)](../../../includes/media/active-directory-app-provisioning-ldap/ldp-3.png#lightbox)</br>
+ 6. At the top, under **Connection** select **Bind**.
+ 7. Leave the defaults and click **OK**.
+ [![Screenshot that shows the Ldp tool bind operation.](../../../includes/media/active-directory-app-provisioning-ldap/ldp-4.png)](../../../includes/media/active-directory-app-provisioning-ldap/ldp-4.png#lightbox)</br>
+ 8. You should now successfully bind to the instance.
+ [![Screenshot that shows the Ldp tool bind success.](../../../includes/media/active-directory-app-provisioning-ldap/ldp-5.png)](../../../includes/media/active-directory-app-provisioning-ldap/ldp-5.png#lightbox)</br>
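If you also want a quick command-line check in addition to ldp.exe, the following PowerShell is a minimal sketch. It assumes the server name APP3 and LDAPS port 636 used in these steps; the certificate validation callback is intentionally permissive because the certificate is self-signed.

```powershell
# Confirm that the LDAPS port is reachable.
Test-NetConnection -ComputerName 'APP3' -Port 636

# Perform a TLS handshake and show the certificate that the AD LDS instance presents.
$tcp = New-Object System.Net.Sockets.TcpClient('APP3', 636)
$ssl = New-Object System.Net.Security.SslStream($tcp.GetStream(), $false, { $true })
$ssl.AuthenticateAsClient('APP3')
"Certificate subject: $($ssl.RemoteCertificate.Subject)"
$ssl.Dispose()
$tcp.Dispose()
```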
+
+### Disable the local password policy
+Currently, the LDAP connector provisions users with a blank password. This provisioning will not satisfy the local password policy on the server, so disable it for testing purposes. To disable password complexity on a non-domain-joined server, use the following steps. A scripted alternative is sketched after the steps.
+
+>[!IMPORTANT]
+>Because ongoing password sync is not a feature of on-premises LDAP provisioning, Microsoft recommends that AD LDS be used specifically with federated applications, in conjunction with AD DS, or when updating existing users in an instance of AD LDS.
+
+ 1. On the server, click **Start**, select **Run**, and then enter **gpedit.msc**.
+ 2. On the **Local Group Policy editor**, navigate to Computer Configuration > Windows Settings > Security Settings > Account Policies > Password Policy
+ 3. On the right, double-click **Password must meet complexity requirements** and select **Disabled**.
+ [![Screenshot of the complexity requirements setting.](../../../includes/media/active-directory-app-provisioning-ldap/local-1.png)](../../../includes/media/active-directory-app-provisioning-ldap/local-1.png#lightbox)</br>
+ 4. Click **Apply** and **OK**.
+ 5. Close the Local Group Policy editor.
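If you prefer to script this change on a standalone test server, the following is a minimal sketch that uses `secedit` to export the local security policy, flip the complexity setting, and re-apply the policy. The working file path is only an example.

```powershell
# Export the local security policy, disable password complexity, and re-apply the policy.
secedit /export /cfg C:\test\secpol.cfg
(Get-Content -Path C:\test\secpol.cfg) -replace 'PasswordComplexity = 1', 'PasswordComplexity = 0' |
    Set-Content -Path C:\test\secpol.cfg
secedit /configure /db C:\Windows\security\local.sdb /cfg C:\test\secpol.cfg /areas SECURITYPOLICY
```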
+
+
+Next, continue in the guidance to [provision users from Azure AD into an LDAP directory](on-premises-ldap-connector-configure.md) to download and configure the provisioning agent.
+
+## Appendix A - Install AD LDS PowerShell script
+The following PowerShell script can be used to automate the installation of Active Directory Lightweight Directory Services. You'll need to edit the script to match your environment; in particular, change `APP3` to the hostname of your computer.
+++
+```powershell
+# Filename: 1_SetupADLDS.ps1
+# Description: Creates a certificate that will be used for SSL and installs Active Directory Lightweight Directory Services.
+#
+# DISCLAIMER:
+# Copyright (c) Microsoft Corporation. All rights reserved. This
+# script is made available to you without any express, implied or
+# statutory warranty, not even the implied warranty of
+# merchantability or fitness for a particular purpose, or the
+# warranty of title or non-infringement. The entire risk of the
+# use or the results from the use of this script remains with you.
+#
+#
+#
+#
+#Declare variables
+$DNSName = 'APP3'
+$CertLocation = 'cert:\LocalMachine\MY'
+$logpath = "c:\"
+$dirname = "test"
+$dirtype = "directory"
+$featureLogPath = "c:\test\featurelog.txt"
+
+#Create a new self-signed certificate
+New-SelfSignedCertificate -DnsName $DNSName -CertStoreLocation $CertLocation
+
+#Create directory
+New-Item -Path $logpath -Name $dirname -ItemType $dirtype
+
+#Export the certificate from the local machine personal store
+Get-ChildItem -Path cert:\LocalMachine\my | Export-Certificate -FilePath c:\test\allcerts.sst -Type SST
+
+#Import the certificate in to the trusted root
+Import-Certificate -FilePath "C:\test\allcerts.sst" -CertStoreLocation cert:\LocalMachine\Root
++
+#Install AD LDS
+start-job -Name addFeature -ScriptBlock {
+Add-WindowsFeature -Name "ADLDS" -IncludeAllSubFeature -IncludeManagementTools
+ }
+Wait-Job -Name addFeature
+Get-WindowsFeature | Where installed >>$featureLogPath
++
+ ```
+
+## Appendix B - Answer file
+This file is used to automate and create an instance of AD LDS. You will edit this file to match your environment; in particular, change `APP3` to the hostname of your server.
+
+>[!IMPORTANT]
+> This script uses the local administrator for the AD LDS service account and has its password hard-coded in the answers. This action is for **testing only** and should never be used in a production environment.
+>
+> If you are installing AD LDS on a domain controller and not a member or standalone server, you will need to change the LocalLDAPPortToListenOn and LocalSSLPortToListenOn to something other than the well-known ports for LDAP and LDAP over SSL. For example, LocalLDAPPortToListenOn=51300 and LocalSSLPortToListenOn=51301.
+
+```
+ [ADAMInstall]
+ InstallType=Unique
+ InstanceName=AD-APP-LDAP
+ LocalLDAPPortToListenOn=389
+ LocalSSLPortToListenOn=636
+ NewApplicationPartitionToCreate=CN=App,DC=contoso,DC=lab
+ DataFilesPath=C:\Program Files\Microsoft ADAM\AD-APP-LDAP\data
+ LogFilesPath=C:\Program Files\Microsoft ADAM\AD-APP-LDAP\data
+ ServiceAccount=APP3\Administrator
+ ServicePassword=Pa$$Word1
+ AddPermissionsToServiceAccount=Yes
+ Administrator=APP3\Administrator
+ ImportLDIFFiles="MS-User.LDF"
+ SourceUserName=APP3\Administrator
+ SourcePassword=Pa$$Word1
+ ```
+## Appendix C - Populate AD LDS PowerShell script
+PowerShell script to populate AD LDS with containers and a service account.
+++
+```powershell
+# Filename: 2_PopulateADLDS.ps1
+# Description: Populates our AD LDS environment with 2 containers and a service account
+
+# DISCLAIMER:
+# Copyright (c) Microsoft Corporation. All rights reserved. This
+# script is made available to you without any express, implied or
+# statutory warranty, not even the implied warranty of
+# merchantability or fitness for a particular purpose, or the
+# warranty of title or non-infringement. The entire risk of the
+# use or the results from the use of this script remains with you.
+#
+#
+#
+#
+# Create service accounts container
+New-ADObject -Name "ServiceAccounts" -Type "container" -Path "CN=App,DC=contoso,DC=lab" -Server "APP3:389"
+Write-Output "Creating ServiceAccounts container"
+
+# Create cloud users container
+New-ADObject -Name "CloudUsers" -Type "container" -Path "CN=App,DC=contoso,DC=lab" -Server "APP3:389"
+Write-Output "Creating CloudUsers container"
+
+# Create a new service account
+New-ADUser -name "svcAccountLDAP" -accountpassword (ConvertTo-SecureString -AsPlainText 'Pa$$1Word' -Force) -Displayname "LDAP Service Account" -server 'APP3:389' -path "CN=ServiceAccounts,CN=App,DC=contoso,DC=lab"
+Write-Output "Creating service account"
+
+# Enable the new service account
+Enable-ADAccount -Identity "CN=svcAccountLDAP,CN=ServiceAccounts,CN=App,DC=contoso,DC=lab" -Server "APP3:389"
+Write-Output "Enabling service account"
+
+# Add the service account to the Administrators role
+Get-ADGroup -Server "APP3:389" -SearchBase "CN=Administrators,CN=Roles,CN=App,DC=contoso,DC=lab" -Filter "name -like 'Administrators'" | Add-ADGroupMember -Members "CN=svcAccountLDAP,CN=ServiceAccounts,CN=App,DC=contoso,DC=lab"
+Write-Output "Adding service account to Administrators role"
++
+ ```
+
+## Next steps
+
+- [Tutorial: ECMA Connector Host generic LDAP connector](on-premises-ldap-connector-configure.md)
active-directory Msal Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-configuration.md
The list of authorities that are known and trusted by you. In addition to the au
|--|-|--|--|
| `type` | String | Yes | Mirrors the audience or account type your app targets. Possible values: `AAD`, `B2C` |
| `audience` | Object | No | Only applies when type=`AAD`. Specifies the identity your app targets. Use the value from your app registration |
-| `authority_url` | String | Yes | Required only when type=`B2C`. Specifies the authority URL or policy your app should use |
+| `authority_url` | String | Yes | Required only when type=`B2C`. Optional for type=`AAD`. Specifies the authority URL or policy your app should use |
| `default` | boolean | Yes | A single `"default":true` is required when one or more authorities is specified. |

#### Audience Properties
active-directory Reference Third Party Cookies Spas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-third-party-cookies-spas.md
The solution outlined in this article works in all of these browsers, or anywher
## Overview of the solution
-To continue authenticating users in SPAs, app developers must use the [authorization code flow](v2-oauth2-auth-code-flow.md). In the auth code flow, the identity provider issues a code, and the SPA redeems the code for an access token and a refresh token. When the app requires additional tokens, it can use the [refresh token flow](v2-oauth2-auth-code-flow.md#refresh-the-access-token) to get new tokens. Microsoft Authentication Library (MSAL) for JavaScript v2.0, implements the authorization code flow for SPAs and, with minor updates, is a drop-in replacement for MSAL.js 1.x.
+To continue authenticating users in SPAs, app developers must use the [authorization code flow](v2-oauth2-auth-code-flow.md). In the auth code flow, the identity provider issues a code, and the SPA redeems the code for an access token and a refresh token. When the app requires new tokens, it can use the [refresh token flow](v2-oauth2-auth-code-flow.md#refresh-the-access-token) to get new tokens. Microsoft Authentication Library (MSAL) for JavaScript v2.0, implements the authorization code flow for SPAs and, with minor updates, is a drop-in replacement for MSAL.js 1.x.
For the Microsoft identity platform, SPAs and native clients follow similar protocol guidance:
For the Microsoft identity platform, SPAs and native clients follow similar prot
- PKCE is _required_ for SPAs on the Microsoft identity platform. PKCE is _recommended_ for native and confidential clients. - No use of a client secret
-SPAs have two additional restrictions:
+SPAs have two more restrictions:
- [The redirect URI must be marked as type `spa`](v2-oauth2-auth-code-flow.md#redirect-uris-for-single-page-apps-spas) to enable CORS on login endpoints. - Refresh tokens issued through the authorization code flow to `spa` redirect URIs have a 24-hour lifetime rather than a 90-day lifetime.
There are two ways of accomplishing sign-in:
- Consider having a pre-load sequence in the app that checks for a login session and redirects to the login page before the app fully unpacks and executes the JavaScript payload. - **Popups** - If the user experience (UX) of a full page redirect doesn't work for the application, consider using a popup to handle authentication.
- - When the popup finishes redirecting to the application after authentication, code in the redirect handler will store the code and tokens in local storage for the application to use. MSAL.js supports popups for authentication, as do most libraries.
+ - When the popup finishes redirecting to the application after authentication, code in the redirect handler will store the code, and tokens in local storage for the application to use. MSAL.js supports popups for authentication, as do most libraries.
- Browsers are decreasing support for popups, so they may not be the most reliable option. User interaction with the SPA before creating the popup may be needed to satisfy browser requirements.
- Apple [describes a popup method](https://webkit.org/blog/8311/intelligent-tracking-prevention-2-0/) as a temporary compatibility fix to give the original window access to third-party cookies. While Apple may remove this transferral of permissions in the future, it will not impact the guidance here.
+ Apple [describes a popup method](https://webkit.org/blog/8311/intelligent-tracking-prevention-2-0/) as a temporary compatibility fix to give the original window access to third-party cookies. While Apple may remove this transferal of permissions in the future, it will not impact the guidance here.
Here, the popup is being used as a first party navigation to the login page so that a session is found and an auth code can be provided. This should continue working into the future. ### Using iframes
-A common pattern in web apps is to use an iframe to embed one app inside anotherd: the top-level frame handles authenticating the user and the application hosted in the iframe can trust that the user is signed in, fetching tokens silently using the implicit flow.
+A common pattern in web apps is to use an iframe to embed one app inside another: the top-level frame handles authenticating the user and the application hosted in the iframe can trust that the user is signed in, fetching tokens silently using the implicit flow. However, there are a couple of caveats to this assumption irrespective of whether third-party cookies are enabled or blocked in the browser.
Silent token acquisition no longer works when third-party cookies are blocked - the application embedded in the iframe must switch to using popups to access the user's session as it can't navigate to the login page.
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md
Previously updated : 08/04/2021 Last updated : 11/22/2022
Policies | <ul><li>Read all properties of policies<li>Manage all properties of o
## Restrict member users' default permissions
-It's possible to add restrictions to users' default permissions. You can use this feature if you don't want all users in the directory to have access to the Azure AD admin portal/directory.
-
-For example, a university has many users in its directory. The admin might not want all of the students in the directory to be able to see the full directory and violate other students' privacy. The use of this feature is optional and at the discretion of the Azure AD administrator.
+It's possible to add restrictions to users' default permissions.
You can restrict default permissions for member users in the following ways:
+> [!CAUTION]
+> Using the **Restrict access to Azure AD administration portal** switch **is NOT a security measure**. For more information on the functionality, see the table below.
+ | Permission | Setting explanation |
+ | - | - |
-| **Register applications** | Setting this option to **No** prevents users from creating application registrations. You can the grant the ability back to specific individuals by adding them to the application developer role. |
+| **Register applications** | Setting this option to **No** prevents users from creating application registrations. You can then grant the ability back to specific individuals by adding them to the application developer role. |
| **Allow users to connect work or school account with LinkedIn** | Setting this option to **No** prevents users from connecting their work or school account with their LinkedIn account. For more information, see [LinkedIn account connections data sharing and consent](../enterprise-users/linkedin-user-consent.md). |
| **Create security groups** | Setting this option to **No** prevents users from creating security groups. Global administrators and user administrators can still create security groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). |
| **Create Microsoft 365 groups** | Setting this option to **No** prevents users from creating Microsoft 365 groups. Setting this option to **Some** allows a set of users to create Microsoft 365 groups. Global administrators and user administrators can still create Microsoft 365 groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). |
-| **Restrict access to Azure AD administration portal** | **What does this switch do?** <br>**No** lets non-administrators browse the Azure AD administration portal. <br>**Yes** Restricts non-administrators from browsing the Azure AD administration portal. Non-administrators who are owners of groups or applications are unable to use the Azure portal to manage their owned resources. </p><p></p><p>**What does it not do?** <br> It does not restrict access to Azure AD data using PowerShell, Microsoft GraphAPI, or other clients such as Visual Studio. <br>It does not restrict access as long as a user is assigned a custom role (or any role). </p><p></p><p>**When should I use this switch?** <br>Use this to prevent users from misconfiguring the resources that they own. </p><p></p><p>**When should I not use this switch?** <br>Do not use this switch as a security measure. Instead, create a Conditional Access policy that targets Microsoft Azure Management will block non-administrators access to [Microsoft Azure Management](../conditional-access/concept-conditional-access-cloud-apps.md#microsoft-azure-management). </p><p></p><p> **How do I grant only a specific non-administrator users the ability to use the Azure AD administration portal?** <br> Set this option to **Yes**, then assign them a role like global reader. </p><p></p><p>**Restrict access to the Entra administration portal** <br>A Conditional Access policy that targets Microsoft Azure Management will target access to all Azure management. |
-| **Read other users** | This setting is available in Microsoft Graph and PowerShell only. Setting this flag to `$false` prevents all non-admins from reading user information from the directory. This flag does not prevent reading user information in other Microsoft services like Exchange Online.</p><p>This setting is meant for special circumstances, so we don't recommend setting the flag to `$false`. |
-
-> [!NOTE]
-> It's assumed that the average user would only use the portal to access Azure AD, and not use PowerShell or the Azure CLI to access their resources. Currently, restricting access to users' default permissions occurs only when users try to access the directory within the Azure portal.
+| **Restrict access to Azure AD administration portal** | **What does this switch do?** <br>**No** lets non-administrators browse the Azure AD administration portal. <br>**Yes** restricts non-administrators from browsing the Azure AD administration portal. Non-administrators who are owners of groups or applications are unable to use the Azure portal to manage their owned resources. </p><p></p><p>**What does it not do?** <br> It doesn't restrict access to Azure AD data using PowerShell, Microsoft GraphAPI, or other clients such as Visual Studio. <br>It doesn't restrict access as long as a user is assigned a custom role (or any role). </p><p></p><p>**When should I use this switch?** <br>Use this option to prevent users from misconfiguring the resources that they own. </p><p></p><p>**When should I not use this switch?** <br>Don't use this switch as a security measure. Instead, create a Conditional Access policy that targets Microsoft Azure Management to block non-administrator access to [Microsoft Azure Management](../conditional-access/concept-conditional-access-cloud-apps.md#microsoft-azure-management). </p><p></p><p> **How do I grant only specific non-administrator users the ability to use the Azure AD administration portal?** <br> Set this option to **Yes**, then assign them a role like global reader. </p><p></p><p>**Restrict access to the Entra administration portal** <br>A Conditional Access policy that targets Microsoft Azure Management will target access to all Azure management. |
+| **Read other users** | This setting is available in Microsoft Graph and PowerShell only. Setting this flag to `$false` prevents all non-admins from reading user information from the directory. This flag doesn't prevent reading user information in other Microsoft services like Exchange Online.</p><p>This setting is meant for special circumstances, so we don't recommend setting the flag to `$false`. |
## Restrict guest users' default permissions
You can restrict default permissions for guest users in the following ways.
Permission | Setting explanation
- | -
-**Guest user access restrictions** | Setting this option to **Guest users have the same access as members** grants all member user permissions to guest users by default.<p>Setting this option to **Guest user access is restricted to properties and memberships of their own directory objects** restricts guest access to only their own user profile by default. Access to other users is no longer allowed, even when they're searching by user principal name, object ID, or display name. Access to group information, including groups memberships, is also no longer allowed.<p>This setting does not prevent access to joined groups in some Microsoft 365 services like Microsoft Teams. To learn more, see [Microsoft Teams guest access](/MicrosoftTeams/guest-access).<p>Guest users can still be added to administrator roles regardless of this permission setting.
+**Guest user access restrictions** | Setting this option to **Guest users have the same access as members** grants all member user permissions to guest users by default.<p>Setting this option to **Guest user access is restricted to properties and memberships of their own directory objects** restricts guest access to only their own user profile by default. Access to other users is no longer allowed, even when they're searching by user principal name, object ID, or display name. Access to group information, including groups memberships, is also no longer allowed.<p>This setting doesn't prevent access to joined groups in some Microsoft 365 services like Microsoft Teams. To learn more, see [Microsoft Teams guest access](/MicrosoftTeams/guest-access).<p>Guest users can still be added to administrator roles regardless of this permission setting.
**Guests can invite** | Setting this option to **Yes** allows guests to invite other guests. To learn more, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
**Members can invite** | Setting this option to **Yes** allows non-admin members of your directory to invite guests. To learn more, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
**Admins and users in the guest inviter role can invite** | Setting this option to **Yes** allows admins and users in the guest inviter role to invite guests. When you set this option to **Yes**, users in the guest inviter role will still be able to invite guests, regardless of the **Members can invite** setting. To learn more, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md
Last updated 08/26/2022
-+
active-directory Jiramicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jiramicrosoft-tutorial.md
Use your Microsoft Azure Active Directory account with Atlassian JIRA server to
To configure Azure AD integration with JIRA SAML SSO by Microsoft, you need the following items: - An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).-- JIRA Core and Software 6.4 to 8.22.1 or JIRA Service Desk 3.0 to 4.22.1 should be installed and configured on Windows 64-bit version.
+- JIRA Core and Software 6.4 to 9.4.0 or JIRA Service Desk 3.0 to 4.22.1 should be installed and configured on Windows 64-bit version.
- JIRA server is HTTPS enabled.
- Note the supported versions for the JIRA plugin are mentioned in the section below.
- JIRA server is reachable on the Internet, particularly to the Azure AD login page for authentication, and should be able to receive the token from Azure AD.
To get started, you need the following items:
## Supported versions of JIRA
-* JIRA Core and Software: 6.4 to 8.22.1.
+* JIRA Core and Software: 6.4 to 9.4.0.
* JIRA Service Desk 3.0 to 4.22.1. * JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](jira52microsoft-tutorial.md).
active-directory Starleaf Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/starleaf-provisioning-tutorial.md
Before you configure and enable automatic user provisioning, you should decide w
Before you configure StarLeaf for automatic user provisioning with Azure AD, you will need to configure SCIM provisioning in StarLeaf:
-1. Sign in to your [StarLeaf Admin Console](https://portal.starleaf.com/#page=login). Navigate to **Integrations** > **Add integration**.
+1. Sign in to your StarLeaf Admin Console. Navigate to **Integrations** > **Add integration**.
![Screenshot of the StarLeaf Admin Console with the Integrations and Add integration options called out.](media/starleaf-provisioning-tutorial/image00.png)
aks Cluster Container Registry Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-container-registry-integration.md
You need to establish an authentication mechanism when using [Azure Container Re
You can set up the AKS to ACR integration using the Azure CLI or Azure PowerShell. The AKS to ACR integration assigns the [**AcrPull** role][acr-pull] to the [Azure Active Directory (Azure AD) **managed identity**][aad-identity] associated with your AKS cluster.
+> [!IMPORTANT]
+> There is a latency issue with Azure Active Directory groups when attaching ACR. If the AcrPull role is granted to an Azure AD group and the kubelet identity is added to the group to complete the RBAC configuration, there might be up to a one-hour delay before the RBAC group takes effect. We recommend that you use [bring your own kubelet identity][byo-kubelet-identity] as a workaround. You can pre-create a user-assigned identity, add it to the Azure AD group, and then use the identity as the kubelet identity to create an AKS cluster. This sequence ensures the identity is added to the Azure AD group before a token is generated by kubelet, which avoids the latency issue. A CLI sketch of this sequence follows this note.
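The following Azure CLI commands are a minimal sketch of that workaround. The identity, group, cluster, and resource group names are placeholders; substitute your own values and your existing Azure AD group that holds the AcrPull assignment.

```azurecli
# Pre-create the kubelet identity and add it to the Azure AD group that holds AcrPull.
az identity create --name myKubeletIdentity --resource-group myResourceGroup
KUBELET_IDENTITY_ID=$(az identity show --name myKubeletIdentity --resource-group myResourceGroup --query id -o tsv)
KUBELET_PRINCIPAL_ID=$(az identity show --name myKubeletIdentity --resource-group myResourceGroup --query principalId -o tsv)
az ad group member add --group myAcrPullGroup --member-id $KUBELET_PRINCIPAL_ID

# Create a control plane identity and the cluster, passing the pre-created kubelet identity.
az identity create --name myControlPlaneIdentity --resource-group myResourceGroup
CONTROL_PLANE_IDENTITY_ID=$(az identity show --name myControlPlaneIdentity --resource-group myResourceGroup --query id -o tsv)
az aks create \
    --name myAKSCluster \
    --resource-group myResourceGroup \
    --enable-managed-identity \
    --assign-identity $CONTROL_PLANE_IDENTITY_ID \
    --assign-kubelet-identity $KUBELET_IDENTITY_ID
```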
+ > [!NOTE] > This article covers automatic authentication between AKS and ACR. If you need to pull an image from a private external registry, use an [image pull secret][image-pull-secret].
nginx0-deployment-669dfc4d4b-xdpd6 1/1 Running 0 20s
[ps-detach]: /powershell/module/az.aks/set-azakscluster#-acrnametodetach [cli-param]: /cli/azure/aks#az-aks-update-optional-parameters [ps-attach]: /powershell/module/az.aks/set-azakscluster#-acrnametoattach
+[byo-kubelet-identity]: use-managed-identity.md#use-a-pre-created-kubelet-managed-identity
aks Monitor Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks.md
Application Insights provides complete monitoring of applications running on AKS
- [Java](../azure-monitor/app/java-in-process-agent.md) - [Node.js](../azure-monitor/app/nodejs.md) - [Python](../azure-monitor/app/opencensus-python.md)-- [Other platforms](../azure-monitor/app/platforms.md)
+- [Other platforms](../azure-monitor/app/app-insights-overview.md#supported-languages)
See [What is Application Insights?](../azure-monitor/app/app-insights-overview.md)
aks Tutorial Kubernetes Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md
As part of the application and cluster lifecycle, you may want to upgrade to the
In this tutorial, part seven of seven, you learn how to: > [!div class="checklist"]
+>
> * Identify current and available Kubernetes versions. > * Upgrade your Kubernetes nodes. > * Validate a successful upgrade.
In this tutorial, part seven of seven, you learn how to:
In previous tutorials, an application was packaged into a container image, and this container image was uploaded to Azure Container Registry (ACR). You also created an AKS cluster. The application was then deployed to the AKS cluster. If you have not done these steps and would like to follow along, start with [Tutorial 1: Prepare an application for AKS][aks-tutorial-prepare-app].
-* If you're using Azure CLI, this article requires that you're running Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+* If you're using Azure CLI, this tutorial requires that you're running Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
* If you're using Azure PowerShell, this tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
-* If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
## Get available cluster versions
aks Use Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-network-policies.md
Please execute the following commands prior to creating a cluster:
```azurecli az extension add --name aks-preview az extension update --name aks-preview
- az feature register --namespace Microsoft.ContainerService --name AKSWindows2022Preview
az feature register --namespace Microsoft.ContainerService --name WindowsNetworkPolicyPreview az provider register -n Microsoft.ContainerService ```
aks Use System Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-system-pools.md
Title: Use system node pools in Azure Kubernetes Service (AKS)
description: Learn how to create and manage system node pools in Azure Kubernetes Service (AKS) Previously updated : 06/18/2020 Last updated : 11/22/2022
You need the Azure PowerShell version 7.5.0 or later installed and configured. R
The following limitations apply when you create and manage AKS clusters that support system node pools. * See [Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)][quotas-skus-regions].
-* The AKS cluster must be built with virtual machine scale sets as the VM type and the *Standard* SKU load balancer.
-* The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter. For Linux node pools, the length must be between 1 and 12 characters. For Windows node pools, the length must be between 1 and 6 characters.
+* The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter. For Linux node pools, the length must be between 1 and 12 characters. For Windows node pools, the length must be between one and six characters.
* An API version of 2020-03-01 or greater must be used to set a node pool mode. Clusters created on API versions older than 2020-03-01 contain only user node pools, but can be migrated to contain system node pools by following [update pool mode steps](#update-existing-cluster-system-and-user-node-pools). * The mode of a node pool is a required property and must be explicitly set when using ARM templates or direct API calls. ## System and user node pools
-For a system node pool, AKS automatically assigns the label **kubernetes.azure.com/mode: system** to its nodes. This causes AKS to prefer scheduling system pods on node pools that contain this label. This label does not prevent you from scheduling application pods on system node pools. However, we recommend you isolate critical system pods from your application pods to prevent misconfigured or rogue application pods from accidentally killing system pods.
+For a system node pool, AKS automatically assigns the label **kubernetes.azure.com/mode: system** to its nodes. This causes AKS to prefer scheduling system pods on node pools that contain this label. This label doesn't prevent you from scheduling application pods on system node pools. However, we recommend you isolate critical system pods from your application pods to prevent misconfigured or rogue application pods from accidentally killing system pods.
You can enforce this behavior by creating a dedicated system node pool. Use the `CriticalAddonsOnly=true:NoSchedule` taint to prevent application pods from being scheduled on system node pools. System node pools have the following restrictions:
System node pools have the following restrictions:
* System pools osType must be Linux. * User node pools osType may be Linux or Windows. * System pools must contain at least one node, and user node pools may contain zero or more nodes.
-* System node pools require a VM SKU of at least 2 vCPUs and 4GB memory. But burstable-VM(B series) is not recommended.
-* A minimum of two nodes 4 vCPUs is recommended(e.g. Standard_DS4_v2), especially for large clusters (Multiple CoreDNS Pod replicas, 3-4+ add-ons, etc.).
+* System node pools require a VM SKU of at least 2 vCPUs and 4 GB memory. But burstable-VM(B series) isn't recommended.
+* A minimum of two nodes 4 vCPUs is recommended (for example, Standard_DS4_v2), especially for large clusters (Multiple CoreDNS Pod replicas, 3-4+ add-ons, etc.).
* System node pools must support at least 30 pods as described by the [minimum and maximum value formula for pods][maximum-pods]. * Spot node pools require user node pools.
-* Adding an additional system node pool or changing which node pool is a system node pool will *NOT* automatically move system pods. System pods can continue to run on the same node pool even if you change it to a user node pool. If you delete or scale down a node pool running system pods that was previously a system node pool, those system pods are redeployed with preferred scheduling to the new system node pool.
+* Adding another system node pool or changing which node pool is a system node pool *does not* automatically move system pods. System pods can continue to run on the same node pool, even if you change it to a user node pool. If you delete or scale down a node pool running system pods that were previously a system node pool, those system pods are redeployed with preferred scheduling to the new system node pool.
You can do the following operations with node pools:
The following example creates a resource group named *myResourceGroup* in the *e
az group create --name myResourceGroup --location eastus ```
-Use the [az aks create][az-aks-create] command to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one dedicated system pool containing one node. For your production workloads, ensure you are using system node pools with at least three nodes. This operation may take several minutes to complete.
+Use the [az aks create][az-aks-create] command to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one dedicated system pool containing one node. For your production workloads, ensure you're using system node pools with at least three nodes. This operation may take several minutes to complete.
```azurecli-interactive # Create a new AKS cluster with a single system pool
The following example creates a resource group named *myResourceGroup* in the *e
New-AzResourceGroup -ResourceGroupName myResourceGroup -Location eastus ```
-Use the [New-AzAksCluster][new-azakscluster] cmdlet to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one dedicated system pool containing one node. For your production workloads, ensure you are using system node pools with at least three nodes. This operation may take several minutes to complete.
+Use the [New-AzAksCluster][new-azakscluster] cmdlet to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one dedicated system pool containing one node. For your production workloads, ensure you're using system node pools with at least three nodes. The create operation may take several minutes to complete.
```azurepowershell-interactive # Create a new AKS cluster with a single system pool
az aks nodepool add \
### [Azure PowerShell](#tab/azure-powershell)
-You can add one or more system node pools to existing AKS clusters. It's recommended to schedule your application pods on user node pools, and dedicate system node pools to only critical system pods. This prevents rogue application pods from accidentally killing system pods. Enforce this behavior with the `CriticalAddonsOnly=true:NoSchedule` [taint][aks-taints] for your system node pools.
+You can add one or more system node pools to existing AKS clusters. It's recommended to schedule your application pods on user node pools and dedicate system node pools to only critical system pods. This separation prevents rogue application pods from accidentally killing system pods. Enforce the behavior with the `CriticalAddonsOnly=true:NoSchedule` [taint][aks-taints] on your system node pools.
The following command adds a dedicated node pool of mode type system with a default count of three nodes.
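A minimal sketch of adding such a pool with the `CriticalAddonsOnly` taint applied, assuming a cluster named *myAKSCluster* in the resource group *myResourceGroup* (the names and pool size here are illustrative assumptions):

```azurecli
# Add a dedicated system node pool and taint it so only critical system pods are scheduled on it
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name systempool \
    --node-count 3 \
    --mode System \
    --node-taints CriticalAddonsOnly=true:NoSchedule
```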
aks Windows Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-faq.md
To fix this error:
1. Move Windows pods from existing Windows agent pools to new Windows agent pools. 1. Delete old Windows agent pools.
+## Why is there an unexpected user named "sshd" on my VM node?
+
+AKS adds a user named "sshd" when installing the OpenSSH service. This user is not malicious. We recommend that customers update their alerts to ignore this unexpected user account.
+ ## How do I rotate the service principal for my Windows node pool? Windows node pools do not support service principal rotation. To update the service principal, create a new Windows node pool and migrate your pods from the older pool to the new one. After your pods are migrated to the new pool, delete the older node pool.
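For the migration itself, a hedged sketch of the two commands involved, assuming a cluster named *myAKSCluster*, an existing Windows pool named *npwin1*, and a replacement pool named *npwin2* (Windows node pool names are limited to six characters):

```azurecli
# Create a replacement Windows node pool
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name npwin2 \
    --os-type Windows \
    --node-count 3

# After your pods have been migrated to npwin2, remove the old pool
az aks nodepool delete \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name npwin1
```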
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
az provider register --namespace Microsoft.ContainerService
Create an AKS cluster using the [az aks create][az-aks-create] command with the `--enable-oidc-issuer` parameter to use the OIDC Issuer. The following example creates a cluster named *myAKSCluster* with one node in the *myResourceGroup*: ```azurecli-interactive
+az group create --name myResourceGroup --location eastus
+ az aks create -g myResourceGroup -n myAKSCluster --node-count 1 --enable-oidc-issuer --enable-workload-identity --generate-ssh-keys ```
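The cluster's OIDC issuer URL is needed later when the federated identity credential is created (it's referenced below as `AKS_OIDC_ISSUER`). A minimal way to capture it, assuming the cluster and resource group names used above:

```bash
# Store the cluster's OIDC issuer URL for use in the federated identity credential step
export AKS_OIDC_ISSUER="$(az aks show --name myAKSCluster --resource-group myResourceGroup --query "oidcIssuerProfile.issuerUrl" --output tsv)"
```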
You can retrieve this information using the Azure CLI command: [az keyvault list
1. Use the Azure CLI [az account set][az-account-set] command to set a specific subscription to be the current active subscription. Then use the [az identity create][az-identity-create] command to create a managed identity. ```azurecli
- az account set --subscription "subscriptionID"
- ```
+ export SUBSCRIPTION_ID="$(az account show --query id --output tsv)"
+ export USER_ASSIGNED_IDENTITY_NAME="myIdentity"
+ export RG_NAME="myResourceGroup"
+ export LOCATION="eastus"
- ```azurecli
- az identity create --name "userAssignedIdentityName" --resource-group "resourceGroupName" --location "location" --subscription "subscriptionID"
+ az identity create --name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RG_NAME}" --location "${LOCATION}" --subscription "${SUBSCRIPTION_ID}"
``` 2. Set an access policy for the managed identity to access secrets in your Key Vault by running the following commands:
- ```bash
- export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "resourceGroupName" --name "userAssignedIdentityName" --query 'clientId' -otsv)"
- ```
- ```azurecli
- az keyvault set-policy --name "keyVaultName" --secret-permissions get --spn "${USER_ASSIGNED_CLIENT_ID}"
+ export RG_NAME="myResourceGroup"
+ export USER_ASSIGNED_IDENTITY_NAME="myIdentity"
+ export KEYVAULT_NAME="myKeyVault"
+ export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RG_NAME}" --name "${USER_ASSIGNED_IDENTITY_NAME}" --query 'clientId' -otsv)"
+
+ az keyvault set-policy --name "${KEYVAULT_NAME}" --secret-permissions get --spn "${USER_ASSIGNED_CLIENT_ID}"
``` ## Create Kubernetes service account
You can retrieve this information using the Azure CLI command: [az keyvault list
Create a Kubernetes service account and annotate it with the client ID of the managed identity created in the previous step. Use the [az aks get-credentials][az-aks-get-credentials] command and replace the values for the cluster name and the resource group name. ```azurecli
-az aks get-credentials -n myAKSCluster -g MyResourceGroup
+az aks get-credentials -n myAKSCluster -g myResourceGroup
```
-Copy and paste the following multi-line input in the Azure CLI, and update the values for `serviceAccountName` and `serviceAccountNamespace` with the Kubernetes service account name and its namespace.
+Copy and paste the following multi-line input in the Azure CLI, and update the values for `SERVICE_ACCOUNT_NAME` and `SERVICE_ACCOUNT_NAMESPACE` with the Kubernetes service account name and its namespace.
```bash
+export SERVICE_ACCOUNT_NAME="workload-identity-sa"
+export SERVICE_ACCOUNT_NAMESPACE="my-namespace"
+ cat <<EOF | kubectl apply -f - apiVersion: v1 kind: ServiceAccount metadata: annotations:
- azure.workload.identity/client-id: ${USER_ASSIGNED_CLIENT_ID}
+ azure.workload.identity/client-id: "${USER_ASSIGNED_CLIENT_ID}"
labels: azure.workload.identity/use: "true"
- name: serviceAccountName
- namespace: serviceAccountNamspace
+ name: "${SERVICE_ACCOUNT_NAME}"
+ namespace: "${SERVICE_ACCOUNT_NAMESPACE}"
EOF ```
Serviceaccount/workload-identity-sa created
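To confirm the service account was created with the expected client ID annotation, a quick check (a sketch, assuming the variables exported above are still set):

```bash
# Verify the workload identity annotation on the service account
kubectl get serviceaccount "${SERVICE_ACCOUNT_NAME}" --namespace "${SERVICE_ACCOUNT_NAMESPACE}" -o yaml
```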
## Establish federated identity credential
-Use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the managed identity, the service account issuer, and the subject. Replace the values `resourceGroupName`, `userAssignedIdentityName`, `federatedIdentityName`, `serviceAccountNamespace`, and `serviceAccountName`.
+Use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the managed identity, the service account issuer, and the subject.
```azurecli
-az identity federated-credential create --name federatedIdentityName --identity-name userAssignedIdentityName --resource-group resourceGroupName --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:serviceAccountNamespace:serviceAccountName
+az identity federated-credential create --name myfederatedIdentity --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RG_NAME}" --issuer "${AKS_OIDC_ISSUER}" --subject system:serviceaccount:"${SERVICE_ACCOUNT_NAMESPACE}":"${SERVICE_ACCOUNT_NAME}"
``` > [!NOTE]
api-management Api Management Howto Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-cache.md
APIs and operations in API Management can be configured with response caching. Response caching can significantly reduce latency for API callers and backend load for API providers. > [!IMPORTANT]
-> Built-in cache is volatile and is shared by all units in the same region in the same API Management service.
-
+> Built-in cache is volatile and is shared by all units in the same region in the same API Management service. Regardless of the cache type being used (internal or external), if a cache-related operation fails to connect to the cache because of the volatility of the cache or any other reason, the API call that uses the cache-related operation doesn't raise an error, and the cache operation completes successfully. In the case of a read operation, a null value is returned to the calling policy expression. Your policy code should be designed to ensure that there's a "fallback" mechanism to retrieve data not found in the cache.
For more detailed information about caching, see [API Management caching policies](api-management-caching-policies.md) and [Custom caching in Azure API Management](api-management-sample-cache-by-key.md). ![cache policies](media/api-management-howto-cache/cache-policies.png)
app-service Configure Language Dotnet Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-dotnet-framework.md
Last updated 06/02/2020
# Configure an ASP.NET app for Azure App Service > [!NOTE]
-> For ASP.NET Core, see [Configure an ASP.NET Core app for Azure App Service](configure-language-dotnetcore.md)
+> For ASP.NET Core, see [Configure an ASP.NET Core app for Azure App Service](configure-language-dotnetcore.md). If your ASP.NET app runs in a custom Windows or Linux container, see [Configure a custom container for Azure App Service](configure-custom-container.md).
ASP.NET apps must be deployed to Azure App Service as compiled binaries. The Visual Studio publishing tool builds the solution and then deploys the compiled binaries directly, whereas the App Service deployment engine deploys the code repository first and then compiles the binaries.
app-service Configure Language Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-dotnetcore.md
zone_pivot_groups: app-service-platform-windows-linux
# Configure an ASP.NET Core app for Azure App Service > [!NOTE]
-> For ASP.NET in .NET Framework, see [Configure an ASP.NET app for Azure App Service](configure-language-dotnet-framework.md)
+> For ASP.NET in .NET Framework, see [Configure an ASP.NET app for Azure App Service](configure-language-dotnet-framework.md). If your ASP.NET Core app runs in a custom Windows or Linux container, see [Configure a custom container for Azure App Service](configure-custom-container.md).
ASP.NET Core apps must be deployed to Azure App Service as compiled binaries. The Visual Studio publishing tool builds the solution and then deploys the compiled binaries directly, whereas the App Service deployment engine deploys the code repository first and then compiles the binaries.
app-service Quickstart Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore.md
target cross-platform with .NET 6.0.
In this quickstart, you'll learn how to create and deploy your first ASP.NET web app to [Azure App Service](overview.md). App Service supports various versions of .NET apps, and provides a highly scalable, self-patching web hosting service. ASP.NET web apps are cross-platform and can be hosted on Linux or Windows. When you're finished, you'll have an Azure resource group consisting of an App Service hosting plan and an App Service with a deployed web application.
+Alternatively, you can deploy an ASP.NET web app as part of a [Windows or Linux container in App Service](quickstart-custom-container.md).
+ ## Prerequisites :::zone target="docs" pivot="development-environment-vs"
azure-arc Diagnose Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/diagnose-connection-issues.md
Title: "Diagnose connection issues for Azure Arc-enabled Kubernetes clusters" Previously updated : 11/10/2022 Last updated : 11/22/2022 description: "Learn how to resolve common issues when connecting Kubernetes clusters to Azure Arc."
When you [create your support request](../../azure-portal/supportability/how-to-
If you are using a proxy server on at least one machine, complete the first five steps of the non-proxy flowchart (through resource provider registration) for basic troubleshooting steps. Then, if you are still encountering issues, review the next flowchart for additional troubleshooting steps. More details about each step are provided below. ### Is the machine executing commands behind a proxy server?
-If the machine is executing commands behind a proxy server, you'll need to set any necessary environment variables, [explained below](#set-environment-variables).
-
-### Set environment variables
-
-Be sure you have set all of the necessary environment variables. For more information, see [Connect using an outbound proxy server](quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server).
+If the machine is executing commands behind a proxy server, you'll need to set all of the necessary environment variables. For more information, see [Connect using an outbound proxy server](quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server).
For example: ```bash
-export HTTP_PROXY=“http://<proxyIP>:<proxyPort>”
-export HTTPS_PROXY=“https://<proxyIP>:<proxyPort>”
-export NO_PROXY=“<service CIDR>,Kubernetes.default.svc,.svc.cluster.local,.svc”
+export HTTP_PROXY="http://<proxyIP>:<proxyPort>"
+export HTTPS_PROXY="https://<proxyIP>:<proxyPort>"
+export NO_PROXY="<cluster-apiserver-ip-address>:<proxyPort>"
``` ### Does the proxy server only accept trusted certificates?
azure-functions Functions Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-scale.md
Maximum instances are given on a per-function app (Consumption) or per-plan (Pre
| Plan | Scale out | Max # instances | | | | | | **[Consumption plan]** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding additional instances of the Functions host, based on the number of incoming trigger events. | **Windows:** 200<br/>**Linux:** 100<sup>1</sup> |
-| **[Premium plan]** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding additional instances of the Functions host, based on the number of events that its functions are triggered on. | **Windows:** 100<br/>**Linux:** 20-40<sup>2</sup>|
+| **[Premium plan]** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding additional instances of the Functions host, based on the number of events that its functions are triggered on. | **Windows:** 100<br/>**Linux:** 20-100<sup>2</sup>|
| **[Dedicated plan]**<sup>3</sup> | Manual/autoscale |10-20| | **[ASE][Dedicated plan]**<sup>3</sup> | Manual/autoscale |100 | | **[Kubernetes]** | Event-driven autoscale for Kubernetes clusters using [KEDA](https://keda.sh). | Varies&nbsp;by&nbsp;cluster&nbsp;&nbsp;| <sup>1</sup> During scale-out, there's currently a limit of 500 instances per subscription per hour for Linux apps on a Consumption plan. <br/>
-<sup>2</sup> In some regions, Linux apps on a Premium plan can scale to 40 instances. For more information, see the [Premium plan article](functions-premium-plan.md#region-max-scale-out). <br/>
+<sup>2</sup> In some regions, Linux apps on a Premium plan can scale to 100 instances. For more information, see the [Premium plan article](functions-premium-plan.md#region-max-scale-out). <br/>
<sup>3</sup> For specific limits for the various App Service plan options, see the [App Service plan limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits). ## Cold start behavior
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overvi
1. In the Azure portal's search box, type in *template* and then select **Deploy a custom template**.
- :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" alt-text="Screenshot that shows portal blade to deploy custom template.":::
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" alt-text="Screenshot that shows the Azure portal with template entered in the search box and Deploy a custom template highlighted in the search results.":::
2. Click **Build your own template in the editor**.
- :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot that shows portal blade to build template in the editor.":::
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot that shows portal screen to build template in the editor.":::
3. Paste the Resource Manager template below into the editor and then click **Save**. You don't need to modify this template since you will provide values for its parameters.
- :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/edit-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/edit-template.png" alt-text="Screenshot that shows portal blade to edit Resource Manager template.":::
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/edit-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/edit-template.png" alt-text="Screenshot that shows portal screen to edit Resource Manager template.":::
```json {
A [data collection endpoint (DCE)](../essentials/data-collection-endpoint-overvi
4. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the data collection rule and then provide values a **Name** for the data collection endpoint. The **Location** should be the same location as the workspace. The **Region** will already be populated and is used for the location of the data collection endpoint.
- :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/custom-deployment-values.png" lightbox="../logs/media/tutorial-workspace-transformations-api/custom-deployment-values.png" alt-text="Screenshot that shows portal blade to edit custom deployment values for data collection endpoint.":::
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/custom-deployment-values.png" lightbox="../logs/media/tutorial-workspace-transformations-api/custom-deployment-values.png" alt-text="Screenshot that shows portal screen to edit custom deployment values for data collection endpoint.":::
5. Click **Review + create** and then **Create** when you review the details. 6. Once the DCE is created, select it so you can view its properties. Note the **Logs ingestion URI** since you'll need this in a later step.
- :::image type="content" source="../logs/media/tutorial-logs-ingestion-api/data-collection-endpoint-overview.png" lightbox="../logs/media/tutorial-logs-ingestion-api/data-collection-endpoint-overview.png" alt-text="Screenshot that shows portal blade with details of data collection endpoint uri.":::
+ :::image type="content" source="../logs/media/tutorial-logs-ingestion-api/data-collection-endpoint-overview.png" lightbox="../logs/media/tutorial-logs-ingestion-api/data-collection-endpoint-overview.png" alt-text="Screenshot that shows the DCE Overview pane in the portal with details of data collection endpoint uri.":::
7. Click **JSON View** to view other details for the DCE. Copy the **Resource ID** since you'll need this in a later step.
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
1. In the Azure portal's search box, type in *template* and then select **Deploy a custom template**.
- :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" alt-text="Screenshot that shows portal blade to deploy custom template.":::
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" alt-text="Screenshot that shows the Azure portal with template entered in the search box and Deploy a custom template highlighted in the search results.":::
2. Click **Build your own template in the editor**.
- :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot that shows portal blade to build template in the editor.":::
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot that shows portal screen to build template in the editor.":::
3. Paste one of the Resource Manager templates below into the editor and then change the following values:
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
4. Click **Save**.
- :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/edit-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/edit-template.png" alt-text="Screenshot that shows portal blade to edit Resource Manager template.":::
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/edit-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/edit-template.png" alt-text="Screenshot that shows portal screen to edit Resource Manager template.":::
**Data collection rule for text log**
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
5. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the data collection rule and then provide values defined in the template. This includes a **Name** for the data collection rule and the **Workspace Resource ID** and **Endpoint Resource ID**. The **Location** should be the same location as the workspace. The **Region** will already be populated and is used for the location of the data collection rule.
- :::image type="content" source="media/data-collection-text-log/custom-deployment-values.png" lightbox="media/data-collection-text-log/custom-deployment-values.png" alt-text="Screenshot that shows portal blade to edit custom deployment values for data collection rule.":::
+ :::image type="content" source="media/data-collection-text-log/custom-deployment-values.png" lightbox="media/data-collection-text-log/custom-deployment-values.png" alt-text="Screenshot that shows the Custom Deployment screen in the portal to edit custom deployment values for data collection rule.":::
6. Click **Review + create** and then **Create** when you review the details. 7. When the deployment is complete, expand the **Deployment details** box and click on your data collection rule to view its details. Click **JSON View**.
- :::image type="content" source="media/data-collection-text-log/data-collection-rule-details.png" lightbox="media/data-collection-text-log/data-collection-rule-details.png" alt-text="Screenshot that shows portal blade with data collection rule details.":::
+ :::image type="content" source="media/data-collection-text-log/data-collection-rule-details.png" lightbox="media/data-collection-text-log/data-collection-rule-details.png" alt-text="Screenshot that shows the Overview pane in the portal with data collection rule details.":::
8. Change the API version to **2021-09-01-preview**.
The final step is to create a data collection association that associates the da
1. From the **Monitor** menu in the Azure portal, select **Data Collection Rules** and select the rule that you just created.
- :::image type="content" source="media/data-collection-text-log/data-collection-rules.png" lightbox="media/data-collection-text-log/data-collection-rules.png" alt-text="Screenshot that shows portal blade with data collection rules menu item.":::
+ :::image type="content" source="media/data-collection-text-log/data-collection-rules.png" lightbox="media/data-collection-text-log/data-collection-rules.png" alt-text="Screenshot that shows the Data Collection Rules pane in the portal with data collection rules menu item.":::
2. Select **Resources** and then click **Add** to view the available resources.
- :::image type="content" source="media/data-collection-text-log/data-collection-rules.png" lightbox="media/data-collection-text-log/data-collection-rules.png" alt-text="Screenshot that shows portal blade with resources for the data collection rule.":::
+ :::image type="content" source="media/data-collection-text-log/data-collection-rules.png" lightbox="media/data-collection-text-log/data-collection-rules.png" alt-text="Screenshot that shows the Data Collection Rules pane in the portal with resources for the data collection rule.":::
3. Select either individual agents to associate the data collection rule, or select a resource group to create an association for all agents in that resource group. Click **Apply**.
- :::image type="content" source="media/data-collection-text-log/select-resources.png" lightbox="media/data-collection-text-log/select-resources.png" alt-text="Screenshot that shows portal blade to add resources to the data collection rule.":::
+ :::image type="content" source="media/data-collection-text-log/select-resources.png" lightbox="media/data-collection-text-log/select-resources.png" alt-text="Screenshot that shows the Resources pane in the portal to add resources to the data collection rule.":::
## Troubleshooting - text logs Use the following steps to troubleshoot collection of text logs.
azure-monitor Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/log-analytics-agent.md
This section explains how to install the Log Analytics agent on different types
### Linux virtual machine on-premises or in another cloud - Use [Azure Arc-enabled servers](../../azure-arc/servers/overview.md) to deploy and manage the Log Analytics VM extension. Review the [deployment options](../../azure-arc/servers/concept-log-analytics-extension-deployment.md) to understand the different deployment methods available for the extension on machines registered with Azure Arc-enabled servers.-- [Manually install](../vm/monitor-virtual-machine.md) the agent calling a wrapper-script hosted on GitHub.
+- [Manually install](../agents/agent-linux.md#install-the-agent) the agent by calling a wrapper script hosted on GitHub.
- Integrate [System Center Operations Manager](./om-agents.md) with Azure Monitor to forward collected data from Windows computers reporting to a management group. ## Data collected
azure-monitor Alerts Manage Alerts Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alerts-previous-version.md
## Manage log alerts using PowerShell [!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-> [!NOTE]
-> PowerShell is not currently supported in API version `2021-08-01`.
Use the PowerShell cmdlets listed below to manage rules with the [Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules).
New-AzResourceGroupDeployment -Name AlertDeployment -ResourceGroupName ResourceG
* Learn about [log alerts](./alerts-unified-log.md). * Create log alerts using [Azure Resource Manager Templates](./alerts-log-create-templates.md). * Understand [webhook actions for log alerts](./alerts-log-webhook.md).
-* Learn more about [log queries](../logs/log-query-overview.md).
+* Learn more about [log queries](../logs/log-query-overview.md).
azure-monitor Alerts Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot.md
If you can see a fired alert in the portal, but its configured action did not tr
1. **Are you calling Slack or Microsoft Teams?** Each of these endpoints expects a specific JSON format. Follow [these instructions](../alerts/action-groups-logic-app.md) to configure a logic app action instead.
- 1. **Did your webhook became unresponsive or returned errors?**
-
- Our timeout period for a webhook response is 10 seconds. The webhook call will be retried up to two additional times when the following HTTP status codes are returned: 408, 429, 503, 504, or when the HTTP endpoint does not respond. The first retry happens after 10 seconds. The second and final retry happens after 100 seconds. If the second retry fails, the endpoint will not be called again for 30 minutes for any action group.
+ 1. **Did your webhook become unresponsive or return errors?**
+
+ The webhook response timeout period is 10 seconds. When the HTTP endpoint does not respond or when the following HTTP status codes are returned, the webhook call is retried up to two times:
+
+ - `408`
+ - `429`
+ - `503`
+ - `504`
+
+ One retry occurs after 10 seconds and another retry occurs after 100 seconds. If the second retry fails, the endpoint is not called again for 15 minutes for any action group.
## Action or notification happened more than once
azure-monitor Itsmc Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-servicenow.md
ServiceNow supported versions include San Diego, Rome, Quebec, Paris, Orlando,
ServiceNow admins must generate a client ID and client secret for their ServiceNow instance. See the following information as required:
+- [Set up OAuth for Tokyo](https://docs.servicenow.com/bundle/tokyo-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)
- [Set up OAuth for San Diego](https://docs.servicenow.com/bundle/sandiego-platform-administration/page/administer/security/task/t_SettingUpOAuth.html) - [Set up OAuth for Rome](https://docs.servicenow.com/bundle/rome-platform-administration/page/administer/security/task/t_SettingUpOAuth.html) - [Set up OAuth for Quebec](https://docs.servicenow.com/bundle/quebec-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)
azure-monitor Proactive Performance Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-performance-diagnostics.md
Last updated 05/04/2017
[Application Insights](../app/app-insights-overview.md) automatically analyzes the performance of your web application, and can warn you about potential problems.
-This feature requires no special setup, other than configuring your app for Application Insights for your [supported language](../app/platforms.md). It's active when your app generates enough telemetry.
+This feature requires no special setup, other than configuring your app for Application Insights for your [supported language](../app/app-insights-overview.md#supported-languages). It's active when your app generates enough telemetry.
## When would I get a smart detection notification?
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
The core API is uniform across all platforms, apart from a few variations like `
| Method | Used for | | | |
-| [`TrackPageView`](#page-views) |Pages, screens, blades, or forms. |
+| [`TrackPageView`](#page-views) |Pages, screens, panes, or forms. |
| [`TrackEvent`](#trackevent) |User actions and other events. Used to track user behavior or to monitor performance. | | [`GetMetric`](#getmetric) |Zero and multidimensional metrics, centrally configured aggregation, C# only. | | [`TrackMetric`](#trackmetric) |Performance measurements such as queue lengths not related to specific events. |
The telemetry is available in the `customMetrics` table in [Application Insights
## Page views
-In a device or webpage app, page view telemetry is sent by default when each screen or page is loaded. But you can change the default to track page views at more or different times. For example, in an app that displays tabs or blades, you might want to track a page whenever the user opens a new blade.
+In a device or webpage app, page view telemetry is sent by default when each screen or page is loaded. But you can change the default to track page views at more or different times. For example, in an app that displays tabs or panes, you might want to track a page whenever the user opens a new pane.
User and session data is sent as properties along with page views, so the user and session charts come alive when there's page view telemetry.
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md
cfg: { // Application Insights Configuration
</script> ```
-For a summary of the noncustom properties available on the telemetry item, see [Application Insights Export Data Model](./export-data-model.md).
+For a summary of the noncustom properties available on the telemetry item, see [Application Insights Export Data Model](./export-telemetry.md#application-insights-export-data-model).
You can add as many initializers as you like. They're called in the order that they're added.
public void Initialize(ITelemetry telemetry)
} ```
-#### Control the client IP address used for gelocation mappings
+#### Control the client IP address used for geolocation mappings
The following sample initializer sets the client IP which will be used for geolocation mapping, instead of the client socket IP address, during telemetry ingestion.
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Post coding questions to [Stack Overflow]() using an Application Insights tag.
### User Voice Leave product feedback for the engineering team on [UserVoice](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0).+
+## Supported languages
+
+* [C#|VB (.NET)](./asp-net.md)
+* [Java](./java-in-process-agent.md)
+* [JavaScript](./javascript.md)
+* [Node.js](./nodejs.md)
+* [Python](./opencensus-python.md)
+
+### Supported platforms and frameworks
+
+Supported platforms and frameworks are listed here.
+
+#### Azure service integration (portal enablement, Azure Resource Manager deployments)
+* [Azure Virtual Machines and Azure Virtual Machine Scale Sets](./azure-vm-vmss-apps.md)
+* [Azure App Service](./azure-web-apps.md)
+* [Azure Functions](../../azure-functions/functions-monitoring.md)
+* [Azure Cloud Services](./azure-web-apps-net-core.md), including both web and worker roles
+
+#### Auto-instrumentation (enable without code changes)
+* [ASP.NET - for web apps hosted with IIS](./status-monitor-v2-overview.md)
+* [ASP.NET Core - for web apps hosted with IIS](./status-monitor-v2-overview.md)
+* [Java](./java-in-process-agent.md)
+
+#### Manual instrumentation / SDK (some code changes required)
+* [ASP.NET](./asp-net.md)
+* [ASP.NET Core](./asp-net-core.md)
+* [Node.js](./nodejs.md)
+* [Python](./opencensus-python.md)
+* [JavaScript - web](./javascript.md)
+ * [React](./javascript-react-plugin.md)
+ * [React Native](./javascript-react-native-plugin.md)
+ * [Angular](./javascript-angular-plugin.md)
+* [Windows desktop applications, services, and worker roles](./windows-desktop.md)
+* [Universal Windows app](../app/mobile-center-quickstart.md) (App Center)
+* [Android](../app/mobile-center-quickstart.md) (App Center)
+* [iOS](../app/mobile-center-quickstart.md) (App Center)
+
+> [!NOTE]
+> OpenTelemetry-based instrumentation is available in preview for [C#, Node.js, and Python](opentelemetry-enable.md). Review the limitations noted at the beginning of each language's official documentation. If you require a full-feature experience, use the existing Application Insights SDKs.
+
+### Logging frameworks
+* [ILogger](./ilogger.md)
+* [Log4Net, NLog, or System.Diagnostics.Trace](./asp-net-trace-logs.md)
+* [Log4J, Logback, or java.util.logging](./java-in-process-agent.md#autocollected-logs)
+* [LogStash plug-in](https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash/logstash-output-applicationinsights)
+* [Azure Monitor](/archive/blogs/msoms/application-insights-connector-in-oms)
+
+### Export and data analysis
+* [Power BI](https://powerbi.microsoft.com/blog/explore-your-application-insights-data-with-power-bi/)
+* [Power BI for workspace-based resources](../logs/log-powerbi.md)
+
+### Unsupported SDKs
+Several other community-supported Application Insights SDKs exist. However, Azure Monitor only provides support when you use the supported instrumentation options listed on this page. We're constantly assessing opportunities to expand our support for other languages. Follow [Azure Updates for Application Insights](https://azure.microsoft.com/updates/?query=application%20insights) for the latest SDK news.
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
If you're having trouble getting Application Map to work as expected, try these
1. Make sure you're using an officially supported SDK. Unsupported or community SDKs might not support correlation.
- For a list of supported SDKs, see [Application Insights: Languages, platforms, and integrations](./platforms.md).
+ For a list of supported SDKs, see [Application Insights: Languages, platforms, and integrations](./app-insights-overview.md#supported-languages).
1. Upgrade all components to the latest SDK version.
azure-monitor Auto Collect Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/auto-collect-dependencies.md
A list of the latest [currently-supported modules](https://github.com/microsoft/
- Set up custom dependency tracking for [OpenCensus Python](./opencensus-python-dependency.md). - [Write custom dependency telemetry](./api-custom-events-metrics.md#trackdependency) - See [data model](./data-model.md) for Application Insights types and data model.-- Check out [platforms](./platforms.md) supported by Application Insights.
+- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/correlation.md
You can also set the cloud role name via environment variable or system property
- Write [custom telemetry](../../azure-monitor/app/api-custom-events-metrics.md). - For advanced correlation scenarios in ASP.NET Core and ASP.NET, see [Track custom operations](custom-operations-tracking.md). - Learn more about [setting cloud_RoleName](./app-map.md#set-or-override-cloud-role-name) for other SDKs.-- Onboard all components of your microservice on Application Insights. Check out the [supported platforms](./platforms.md).
+- Onboard all components of your microservice on Application Insights. Check out the [supported platforms](./app-insights-overview.md#supported-languages).
- See the [data model](./data-model.md) for Application Insights types. - Learn how to [extend and filter telemetry](./api-filtering-sampling.md). - Review the [Application Insights config reference](configuration-with-applicationinsights-config.md).
azure-monitor Data Model Dependency Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-dependency-telemetry.md
Indication of successful or unsuccessful call.
- Set up dependency tracking for [Java](./java-in-process-agent.md). - [Write custom dependency telemetry](./api-custom-events-metrics.md#trackdependency) - See [data model](data-model.md) for Application Insights types and data model.-- Check out [platforms](./platforms.md) supported by Application Insights.
+- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Data Model Event Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-event-telemetry.md
Max length: 512 characters
- See [data model](data-model.md) for Application Insights types and data model. - [Write custom event telemetry](./api-custom-events-metrics.md#trackevent)-- Check out [platforms](./platforms.md) supported by Application Insights.
+- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Data Model Exception Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-exception-telemetry.md
Trace severity level. Value can be `Verbose`, `Information`, `Warning`, `Error`,
- See [data model](data-model.md) for Application Insights types and data model. - Learn how to [diagnose exceptions in your web apps with Application Insights](./asp-net-exceptions.md).-- Check out [platforms](./platforms.md) supported by Application Insights.
+- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Data Model Metric Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-metric-telemetry.md
Metric with the custom property `CustomPerfCounter` set to `true` indicate that
- Learn how to use [Application Insights API for custom events and metrics](./api-custom-events-metrics.md#trackmetric). - See [data model](data-model.md) for Application Insights types and data model.-- Check out [platforms](./platforms.md) supported by Application Insights.
+- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Data Model Request Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-request-telemetry.md
You can read more on request result code and status code in the [blog post](http
- [Write custom request telemetry](./api-custom-events-metrics.md#trackrequest) - See [data model](data-model.md) for Application Insights types and data model. - Learn how to [configure ASP.NET Core](./asp-net.md) application with Application Insights.-- Check out [platforms](./platforms.md) supported by Application Insights.
+- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Data Model Trace Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-trace-telemetry.md
Trace severity level. Value can be `Verbose`, `Information`, `Warning`, `Error`,
- [Explore Java trace logs in Application Insights](./java-in-process-agent.md#autocollected-logs). - See [data model](data-model.md) for Application Insights types and data model. - [Write custom trace telemetry](./api-custom-events-metrics.md#tracktrace)-- Check out [platforms](./platforms.md) supported by Application Insights.
+- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Data Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model.md
To report data model or schema problems and suggestions, use our [GitHub reposit
- [Write custom telemetry](./api-custom-events-metrics.md). - Learn how to [extend and filter telemetry](./api-filtering-sampling.md). - Use [sampling](./sampling.md) to minimize the amount of telemetry based on data model.-- Check out [platforms](./platforms.md) supported by Application Insights.
+- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
This product includes GeoLite2 data created by [MaxMind](https://www.maxmind.com
[config]: ./configuration-with-applicationinsights-config.md [greenbrown]: ./asp-net.md [java]: ./java-in-process-agent.md
-[platforms]: ./platforms.md
+[platforms]: ./app-insights-overview.md#supported-languages
[pricing]: https://azure.microsoft.com/pricing/details/application-insights/ [redfield]: ./status-monitor-v2-overview.md [start]: ./app-insights-overview.md
azure-monitor Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/devops.md
When an alert is raised, Application Insights can automatically create a work it
Getting started with Application Insights is easy. The main options are: * [IIS servers](./status-monitor-v2-overview.md)
-* Instrument your project during development. You can do this for [ASP.NET](./asp-net.md) or [Java](./java-in-process-agent.md) apps, and [Node.js](./nodejs.md) and a host of [other types](./platforms.md).
+* Instrument your project during development. You can do this for [ASP.NET](./asp-net.md) or [Java](./java-in-process-agent.md) apps, and [Node.js](./nodejs.md) and a host of [other types](./app-insights-overview.md#supported-languages).
* Instrument [any web page](./javascript.md) by adding a short code snippet.
azure-monitor Export Data Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-data-model.md
- Title: Azure Application Insights Data Model | Microsoft Docs
-description: Describes properties exported from continuous export in JSON, and used as filters.
- Previously updated : 01/08/2019---
-# Application Insights Export Data Model
-This table lists the properties of telemetry sent from the [Application Insights](./app-insights-overview.md) SDKs to the portal.
-You'll see these properties in data output from [Continuous Export](export-telemetry.md).
-They also appear in property filters in [Metric Explorer](../essentials/metrics-charts.md) and [Diagnostic Search](./diagnostic-search.md).
-
-Points to note:
-
-* `[0]` in these tables denotes a point in the path where you have to insert an index; but it isn't always 0.
-* Time durations are in tenths of a microsecond, so 10000000 == 1 second.
-* Dates and times are UTC, and are given in the ISO format `yyyy-MM-DDThh:mm:ss.sssZ`
-
-## Example
-
-```json
-// A server report about an HTTP request
-{
- "request": [
- {
- "urlData": { // derived from 'url'
- "host": "contoso.org",
- "base": "/",
- "hashTag": ""
- },
- "responseCode": 200, // Sent to client
- "success": true, // Default == responseCode<400
- // Request id becomes the operation id of child events
- "id": "fCOhCdCnZ9I=",
- "name": "GET Home/Index",
- "count": 1, // 100% / sampling rate
- "durationMetric": {
- "value": 1046804.0, // 10000000 == 1 second
- // Currently the following fields are redundant:
- "count": 1.0,
- "min": 1046804.0,
- "max": 1046804.0,
- "stdDev": 0.0,
- "sampledValue": 1046804.0
- },
- "url": "/"
- }
- ],
- "internal": {
- "data": {
- "id": "7f156650-ef4c-11e5-8453-3f984b167d05",
- "documentVersion": "1.61"
- }
- },
- "context": {
- "device": { // client browser
- "type": "PC",
- "screenResolution": { },
- "roleInstance": "WFWEB14B.fabrikam.net"
- },
- "application": { },
- "location": { // derived from client ip
- "continent": "North America",
- "country": "United States",
- // last octagon is anonymized to 0 at portal:
- "clientip": "168.62.177.0",
- "province": "",
- "city": ""
- },
- "data": {
- "isSynthetic": true, // we identified source as a bot
- // percentage of generated data sent to portal:
- "samplingRate": 100.0,
- "eventTime": "2016-03-21T10:05:45.7334717Z" // UTC
- },
- "user": {
- "isAuthenticated": false,
- "anonId": "us-tx-sn1-azr", // bot agent id
- "anonAcquisitionDate": "0001-01-01T00:00:00Z",
- "authAcquisitionDate": "0001-01-01T00:00:00Z",
- "accountAcquisitionDate": "0001-01-01T00:00:00Z"
- },
- "operation": {
- "id": "fCOhCdCnZ9I=",
- "parentId": "fCOhCdCnZ9I=",
- "name": "GET Home/Index"
- },
- "cloud": { },
- "serverDevice": { },
- "custom": { // set by custom fields of track calls
- "dimensions": [ ],
- "metrics": [ ]
- },
- "session": {
- "id": "65504c10-44a6-489e-b9dc-94184eb00d86",
- "isFirst": true
- }
- }
-}
-```
-
-## Context
-All types of telemetry are accompanied by a context section. Not all of these fields are transmitted with every data point.
-
-| Path | Type | Notes |
-| | | |
-| context.custom.dimensions [0] |object [ ] |Key-value string pairs set by custom properties parameter. Key max length 100, values max length 1024. More than 100 unique values, the property can be searched but cannot be used for segmentation. Max 200 keys per ikey. |
-| context.custom.metrics [0] |object [ ] |Key-value pairs set by custom measurements parameter and by TrackMetrics. Key max length 100, values may be numeric. |
-| context.data.eventTime |string |UTC |
-| context.data.isSynthetic |boolean |Request appears to come from a bot or web test. |
-| context.data.samplingRate |number |Percentage of telemetry generated by the SDK that is sent to portal. Range 0.0-100.0. |
-| context.device |object |Client device |
-| context.device.browser |string |IE, Chrome, ... |
-| context.device.browserVersion |string |Chrome 48.0, ... |
-| context.device.deviceModel |string | |
-| context.device.deviceName |string | |
-| context.device.id |string | |
-| context.device.locale |string |en-GB, de-DE, ... |
-| context.device.network |string | |
-| context.device.oemName |string | |
-| context.device.os |string | |
-| context.device.osVersion |string |Host OS |
-| context.device.roleInstance |string |ID of server host |
-| context.device.roleName |string | |
-| context.device.screenResolution |string | |
-| context.device.type |string |PC, Browser, ... |
-| context.location |object |Derived from `clientip`. |
-| context.location.city |string |Derived from `clientip`, if known |
-| context.location.clientip |string |Last octagon is anonymized to 0. |
-| context.location.continent |string | |
-| context.location.country |string | |
-| context.location.province |string |State or province |
-| context.operation.id |string |Items that have the same `operation id` are shown as Related Items in the portal. Usually the `request id`. |
-| context.operation.name |string |url or request name |
-| context.operation.parentId |string |Allows nested related items. |
-| context.session.id |string |`Id` of a group of operations from the same source. A period of 30 minutes without an operation signals the end of a session. |
-| context.session.isFirst |boolean | |
-| context.user.accountAcquisitionDate |string | |
-| context.user.accountId |string | |
-| context.user.anonAcquisitionDate |string | |
-| context.user.anonId |string | |
-| context.user.authAcquisitionDate |string |[Authenticated User](./api-custom-events-metrics.md#authenticated-users) |
-| context.user.authId |string | |
-| context.user.isAuthenticated |boolean | |
-| context.user.storeRegion |string | |
-| internal.data.documentVersion |string | |
-| internal.data.id |string | `Unique id` that is assigned when an item is ingested to Application Insights |
-
-## Events
-Custom events generated by [TrackEvent()](./api-custom-events-metrics.md#trackevent).
-
-| Path | Type | Notes |
-| | | |
-| event [0] count |integer |100/([sampling](./sampling.md) rate). For example 4 =&gt; 25%. |
-| event [0] name |string |Event name. Max length 250. |
-| event [0] url |string | |
-| event [0] urlData.base |string | |
-| event [0] urlData.host |string | |
-
-## Exceptions
-Reports [exceptions](./asp-net-exceptions.md) in the server and in the browser.
-
-| Path | Type | Notes |
-| | | |
-| basicException [0] assembly |string | |
-| basicException [0] count |integer |100/([sampling](./sampling.md) rate). For example 4 =&gt; 25%. |
-| basicException [0] exceptionGroup |string | |
-| basicException [0] exceptionType |string | |
-| basicException [0] failedUserCodeMethod |string | |
-| basicException [0] failedUserCodeAssembly |string | |
-| basicException [0] handledAt |string | |
-| basicException [0] hasFullStack |boolean | |
-| basicException [0] `id` |string | |
-| basicException [0] method |string | |
-| basicException [0] message |string |Exception message. Max length 10k. |
-| basicException [0] outerExceptionMessage |string | |
-| basicException [0] outerExceptionThrownAtAssembly |string | |
-| basicException [0] outerExceptionThrownAtMethod |string | |
-| basicException [0] outerExceptionType |string | |
-| basicException [0] outerId |string | |
-| basicException [0] parsedStack [0] assembly |string | |
-| basicException [0] parsedStack [0] fileName |string | |
-| basicException [0] parsedStack [0] level |integer | |
-| basicException [0] parsedStack [0] line |integer | |
-| basicException [0] parsedStack [0] method |string | |
-| basicException [0] stack |string |Max length 10k |
-| basicException [0] typeName |string | |
-
-## Trace Messages
-Sent by [TrackTrace](./api-custom-events-metrics.md#tracktrace), and by the [logging adapters](./asp-net-trace-logs.md).
-
-| Path | Type | Notes |
-| | | |
-| message [0] loggerName |string | |
-| message [0] parameters |string | |
-| message [0] raw |string |The log message, max length 10k. |
-| message [0] severityLevel |string | |
-
-## Remote dependency
-Sent by TrackDependency. Used to report performance and usage of [calls to dependencies](./asp-net-dependencies.md) in the server, and AJAX calls in the browser.
-
-| Path | Type | Notes |
-| | | |
-| remoteDependency [0] async |boolean | |
-| remoteDependency [0] baseName |string | |
-| remoteDependency [0] commandName |string |For example "home/index" |
-| remoteDependency [0] count |integer |100/([sampling](./sampling.md) rate). For example 4 =&gt; 25%. |
-| remoteDependency [0] dependencyTypeName |string |HTTP, SQL, ... |
-| remoteDependency [0] durationMetric.value |number |Time from call to completion of response by dependency |
-| remoteDependency [0] `id` |string | |
-| remoteDependency [0] name |string |Url. Max length 250. |
-| remoteDependency [0] resultCode |string |from HTTP dependency |
-| remoteDependency [0] success |boolean | |
-| remoteDependency [0] type |string |Http, Sql,... |
-| remoteDependency [0] url |string |Max length 2000 |
-| remoteDependency [0] urlData.base |string |Max length 2000 |
-| remoteDependency [0] urlData.hashTag |string | |
-| remoteDependency [0] urlData.host |string |Max length 200 |
-
-## Requests
-Sent by [TrackRequest](./api-custom-events-metrics.md#trackrequest). The standard modules use this to reports server response time, measured at the server.
-
-| Path | Type | Notes |
-| | | |
-| request [0] count |integer |100/([sampling](./sampling.md) rate). For example: 4 =&gt; 25%. |
-| request [0] durationMetric.value |number |Time from request arriving to response. 1e7 == 1s |
-| request [0] `id` |string |`Operation id` |
-| request [0] name |string |GET/POST + url base. Max length 250 |
-| request [0] responseCode |integer |HTTP response sent to client |
-| request [0] success |boolean |Default == (responseCode &lt; 400) |
-| request [0] url |string |Not including host |
-| request [0] urlData.base |string | |
-| request [0] urlData.hashTag |string | |
-| request [0] urlData.host |string | |
-
-## Page View Performance
-Sent by the browser. Measures the time to process a page, from user initiating the request to display complete (excluding async AJAX calls).
-
-Context values show client OS and browser version.
-
-| Path | Type | Notes |
-| | | |
-| clientPerformance [0] clientProcess.value |integer |Time from end of receiving the HTML to displaying the page. |
-| clientPerformance [0] name |string | |
-| clientPerformance [0] networkConnection.value |integer |Time taken to establish a network connection. |
-| clientPerformance [0] receiveRequest.value |integer |Time from end of sending the request to receiving the HTML in reply. |
-| clientPerformance [0] sendRequest.value |integer |Time from taken to send the HTTP request. |
-| clientPerformance [0] total.value |integer |Time from starting to send the request to displaying the page. |
-| clientPerformance [0] url |string |URL of this request |
-| clientPerformance [0] urlData.base |string | |
-| clientPerformance [0] urlData.hashTag |string | |
-| clientPerformance [0] urlData.host |string | |
-| clientPerformance [0] urlData.protocol |string | |
-
-## Page Views
-Sent by trackPageView() or [stopTrackPage](./api-custom-events-metrics.md#page-views)
-
-| Path | Type | Notes |
-| | | |
-| view [0] count |integer |100/([sampling](./sampling.md) rate). For example 4 =&gt; 25%. |
-| view [0] durationMetric.value |integer |Value optionally set in trackPageView() or by startTrackPage() - stopTrackPage(). Not the same as clientPerformance values. |
-| view [0] name |string |Page title. Max length 250 |
-| view [0] url |string | |
-| view [0] urlData.base |string | |
-| view [0] urlData.hashTag |string | |
-| view [0] urlData.host |string | |
-
-## Availability
-Reports [availability web tests](./monitor-web-app-availability.md).
-
-| Path | Type | Notes |
-| | | |
-| availability [0] availabilityMetric.name |string |availability |
-| availability [0] availabilityMetric.value |number |1.0 or 0.0 |
-| availability [0] count |integer |100/([sampling](./sampling.md) rate). For example 4 =&gt; 25%. |
-| availability [0] dataSizeMetric.name |string | |
-| availability [0] dataSizeMetric.value |integer | |
-| availability [0] durationMetric.name |string | |
-| availability [0] durationMetric.value |number |Duration of test. 1e7==1s |
-| availability [0] message |string |Failure diagnostic |
-| availability [0] result |string |Pass/Fail |
-| availability [0] runLocation |string |Geo source of http req |
-| availability [0] testName |string | |
-| availability [0] testRunId |string | |
-| availability [0] testTimestamp |string | |
-
-## Metrics
-Generated by TrackMetric().
-
-The metric value is found in context.custom.metrics[0]
-
-For example:
-
-```json
-{
- "metric": [ ],
- "context": {
- ...
- "custom": {
- "dimensions": [
- { "ProcessId": "4068" }
- ],
- "metrics": [
- {
- "dispatchRate": {
- "value": 0.001295,
- "count": 1.0,
- "min": 0.001295,
- "max": 0.001295,
- "stdDev": 0.0,
- "sampledValue": 0.001295,
- "sum": 0.001295
- }
- }
- ]
- }
- }
-}
-```
-
-## About metric values
-Metric values, both in metric reports and elsewhere, are reported with a standard object structure. For example:
-
-```json
-"durationMetric": {
- "name": "contoso.org",
- "type": "Aggregation",
- "value": 468.71603053650279,
- "count": 1.0,
- "min": 468.71603053650279,
- "max": 468.71603053650279,
- "stdDev": 0.0,
- "sampledValue": 468.71603053650279
-}
-```
-
-Currently - though this might change in the future - in all values reported from the standard SDK modules, `count==1` and only the `name` and `value` fields are useful. The only case where they would be different would be if you write your own TrackMetric calls in which you set the other parameters.
-
-The purpose of the other fields is to allow metrics to be aggregated in the SDK, to reduce traffic to the portal. For example, you could average several successive readings before sending each metric report. Then you would calculate the min, max, standard deviation and aggregate value (sum or average) and set count to the number of readings represented by the report.
-
-In the tables above, we have omitted the rarely used fields count, min, max, stdDev, and sampledValue.
-
-Instead of pre-aggregating metrics, you can use [sampling](./sampling.md) if you need to reduce the volume of telemetry.
-
-### Durations
-Except where otherwise noted, durations are represented in tenths of a microsecond, so that 10000000.0 means 1 second.
-
-## See also
-* [Application Insights](./app-insights-overview.md)
-* [Continuous Export](export-telemetry.md)
-* [Code samples](export-telemetry.md#code-samples)
-
azure-monitor Export Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md
After the first export is finished, you'll find the following structure in your
|Name | Description | |:-|:|
-| [Availability](export-data-model.md#availability) | Reports [availability web tests](./monitor-web-app-availability.md). |
-| [Event](export-data-model.md#events) | Custom events generated by [TrackEvent()](./api-custom-events-metrics.md#trackevent).
-| [Exceptions](export-data-model.md#exceptions) |Reports [exceptions](./asp-net-exceptions.md) in the server and in the browser.
-| [Messages](export-data-model.md#trace-messages) | Sent by [TrackTrace](./api-custom-events-metrics.md#tracktrace), and by the [logging adapters](./asp-net-trace-logs.md).
-| [Metrics](export-data-model.md#metrics) | Generated by metric API calls.
-| [PerformanceCounters](export-data-model.md) | Performance Counters collected by Application Insights.
-| [Requests](export-data-model.md#requests)| Sent by [TrackRequest](./api-custom-events-metrics.md#trackrequest). The standard modules use requests to report server response time, measured at the server.|
+| [Availability](#availability) | Reports [availability web tests](./monitor-web-app-availability.md). |
+| [Event](#events) | Custom events generated by [TrackEvent()](./api-custom-events-metrics.md#trackevent).
+| [Exceptions](#exceptions) |Reports [exceptions](./asp-net-exceptions.md) in the server and in the browser.
+| [Messages](#trace-messages) | Sent by [TrackTrace](./api-custom-events-metrics.md#tracktrace), and by the [logging adapters](./asp-net-trace-logs.md).
+| [Metrics](#metrics) | Generated by metric API calls.
+| [PerformanceCounters](#application-insights-export-data-model) | Performance Counters collected by Application Insights.
+| [Requests](#requests)| Sent by [TrackRequest](./api-custom-events-metrics.md#trackrequest). The standard modules use requests to report server response time, measured at the server.|
### Edit continuous export
Time durations are in ticks, where 10 000 ticks = 1 ms. For example, these value
"clientProcess": {"value": 17970000.0} ```
-For a detailed data model reference for the property types and values, see [Application Insights export data model](export-data-model.md).
+For a detailed data model reference for the property types and values, see [Application Insights export data model](#application-insights-export-data-model).
## Process the data On a small scale, you can write some code to pull apart your data and read it into a spreadsheet. For example:
Yes. Select **Disable**.
* [Stream Analytics sample](../../stream-analytics/app-insights-export-stream-analytics.md) * [Export to SQL by using Stream Analytics][exportasa]
-* [Detailed data model reference for property types and values](export-data-model.md)
+* [Detailed data model reference for property types and values](#application-insights-export-data-model)
## Diagnostic settings-based export
To migrate to diagnostic settings export:
> > These steps are necessary because Application Insights accesses telemetry across Application Insight resources, including Log Analytics workspaces, to provide complete end-to-end transaction operations and accurate application maps. Because diagnostic logs use the same table names, duplicate telemetry can be displayed if the user has access to multiple resources that contain the same data.
+## Application Insights Export Data Model
+The tables in this section list the properties of telemetry sent from the [Application Insights](./app-insights-overview.md) SDKs to the portal.
+You'll see these properties in data output from [Continuous Export](export-telemetry.md).
+They also appear in property filters in [Metric Explorer](../essentials/metrics-charts.md) and [Diagnostic Search](./diagnostic-search.md).
+
+Points to note:
+
+* `[0]` in these tables denotes a point in the path where you have to insert an index; the index isn't always 0.
+* Time durations are in tenths of a microsecond, so 10000000 == 1 second.
+* Dates and times are UTC, and are given in the ISO format `yyyy-MM-DDThh:mm:ss.sssZ`.
+
+### Example
+
+```json
+// A server report about an HTTP request
+{
+ "request": [
+ {
+ "urlData": { // derived from 'url'
+ "host": "contoso.org",
+ "base": "/",
+ "hashTag": ""
+ },
+ "responseCode": 200, // Sent to client
+ "success": true, // Default == responseCode<400
+ // Request id becomes the operation id of child events
+ "id": "fCOhCdCnZ9I=",
+ "name": "GET Home/Index",
+ "count": 1, // 100% / sampling rate
+ "durationMetric": {
+ "value": 1046804.0, // 10000000 == 1 second
+ // Currently the following fields are redundant:
+ "count": 1.0,
+ "min": 1046804.0,
+ "max": 1046804.0,
+ "stdDev": 0.0,
+ "sampledValue": 1046804.0
+ },
+ "url": "/"
+ }
+ ],
+ "internal": {
+ "data": {
+ "id": "7f156650-ef4c-11e5-8453-3f984b167d05",
+ "documentVersion": "1.61"
+ }
+ },
+ "context": {
+ "device": { // client browser
+ "type": "PC",
+ "screenResolution": { },
+ "roleInstance": "WFWEB14B.fabrikam.net"
+ },
+ "application": { },
+ "location": { // derived from client ip
+ "continent": "North America",
+ "country": "United States",
+ // last octet is anonymized to 0 in the portal:
+ "clientip": "168.62.177.0",
+ "province": "",
+ "city": ""
+ },
+ "data": {
+ "isSynthetic": true, // we identified source as a bot
+ // percentage of generated data sent to portal:
+ "samplingRate": 100.0,
+ "eventTime": "2016-03-21T10:05:45.7334717Z" // UTC
+ },
+ "user": {
+ "isAuthenticated": false,
+ "anonId": "us-tx-sn1-azr", // bot agent id
+ "anonAcquisitionDate": "0001-01-01T00:00:00Z",
+ "authAcquisitionDate": "0001-01-01T00:00:00Z",
+ "accountAcquisitionDate": "0001-01-01T00:00:00Z"
+ },
+ "operation": {
+ "id": "fCOhCdCnZ9I=",
+ "parentId": "fCOhCdCnZ9I=",
+ "name": "GET Home/Index"
+ },
+ "cloud": { },
+ "serverDevice": { },
+ "custom": { // set by custom fields of track calls
+ "dimensions": [ ],
+ "metrics": [ ]
+ },
+ "session": {
+ "id": "65504c10-44a6-489e-b9dc-94184eb00d86",
+ "isFirst": true
+ }
+ }
+}
+```
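+
+To show how the `[0]` path notation and the sampled `count` field play out against a record like this, here's a minimal sketch (not part of any SDK; the interfaces are assumptions covering only the fields used):
+
+```typescript
+// Minimal sketch: read one exported "request" record (shape assumed from the example above).
+interface RequestItem {
+  urlData: { host: string; base: string; hashTag: string };
+  responseCode: number;
+  name: string;
+  count: number;                     // 100 / sampling rate
+  durationMetric: { value: number }; // tenths of a microsecond
+}
+
+interface ExportedRecord {
+  request: RequestItem[];
+  context: { data: { samplingRate: number; eventTime: string } };
+}
+
+function summarizeRequest(record: ExportedRecord): string {
+  const req = record.request[0]; // the [0] from the path tables below
+  return `${req.name} on ${req.urlData.host}${req.urlData.base}: ` +
+    `${req.count} request(s) after adjusting for sampling, HTTP ${req.responseCode}, at ${record.context.data.eventTime}`;
+}
+```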
+
+### Context
+All types of telemetry are accompanied by a context section. Not all of these fields are transmitted with every data point.
+
+| Path | Type | Notes |
+| --- | --- | --- |
+| context.custom.dimensions [0] |object [ ] |Key-value string pairs set by the custom properties parameter. Key max length 100, values max length 1024. If there are more than 100 unique values, the property can be searched but can't be used for segmentation. Max 200 keys per ikey. |
+| context.custom.metrics [0] |object [ ] |Key-value pairs set by the custom measurements parameter and by TrackMetrics. Key max length 100; values are numeric. |
+| context.data.eventTime |string |UTC |
+| context.data.isSynthetic |boolean |Request appears to come from a bot or web test. |
+| context.data.samplingRate |number |Percentage of telemetry generated by the SDK that is sent to portal. Range 0.0-100.0. |
+| context.device |object |Client device |
+| context.device.browser |string |IE, Chrome, ... |
+| context.device.browserVersion |string |Chrome 48.0, ... |
+| context.device.deviceModel |string | |
+| context.device.deviceName |string | |
+| context.device.id |string | |
+| context.device.locale |string |en-GB, de-DE, ... |
+| context.device.network |string | |
+| context.device.oemName |string | |
+| context.device.os |string | |
+| context.device.osVersion |string |Host OS |
+| context.device.roleInstance |string |ID of server host |
+| context.device.roleName |string | |
+| context.device.screenResolution |string | |
+| context.device.type |string |PC, Browser, ... |
+| context.location |object |Derived from `clientip`. |
+| context.location.city |string |Derived from `clientip`, if known |
+| context.location.clientip |string |Last octet is anonymized to 0. |
+| context.location.continent |string | |
+| context.location.country |string | |
+| context.location.province |string |State or province |
+| context.operation.id |string |Items that have the same `operation id` are shown as Related Items in the portal. Usually the `request id`. |
+| context.operation.name |string |URL or request name |
+| context.operation.parentId |string |Allows nested related items. |
+| context.session.id |string |`Id` of a group of operations from the same source. A period of 30 minutes without an operation signals the end of a session. |
+| context.session.isFirst |boolean | |
+| context.user.accountAcquisitionDate |string | |
+| context.user.accountId |string | |
+| context.user.anonAcquisitionDate |string | |
+| context.user.anonId |string | |
+| context.user.authAcquisitionDate |string |[Authenticated User](./api-custom-events-metrics.md#authenticated-users) |
+| context.user.authId |string | |
+| context.user.isAuthenticated |boolean | |
+| context.user.storeRegion |string | |
+| internal.data.documentVersion |string | |
+| internal.data.id |string | `Unique id` that is assigned when an item is ingested to Application Insights |
+
+### Events
+Custom events generated by [TrackEvent()](./api-custom-events-metrics.md#trackevent).
+
+| Path | Type | Notes |
+| --- | --- | --- |
+| event [0] count |integer |100/([sampling](./sampling.md) rate). For example 4 =&gt; 25%. |
+| event [0] name |string |Event name. Max length 250. |
+| event [0] url |string | |
+| event [0] urlData.base |string | |
+| event [0] urlData.host |string | |
+
+### Exceptions
+Reports [exceptions](./asp-net-exceptions.md) in the server and in the browser.
+
+| Path | Type | Notes |
+| --- | --- | --- |
+| basicException [0] assembly |string | |
+| basicException [0] count |integer |100/([sampling](./sampling.md) rate). For example 4 =&gt; 25%. |
+| basicException [0] exceptionGroup |string | |
+| basicException [0] exceptionType |string | |
+| basicException [0] failedUserCodeMethod |string | |
+| basicException [0] failedUserCodeAssembly |string | |
+| basicException [0] handledAt |string | |
+| basicException [0] hasFullStack |boolean | |
+| basicException [0] `id` |string | |
+| basicException [0] method |string | |
+| basicException [0] message |string |Exception message. Max length 10k. |
+| basicException [0] outerExceptionMessage |string | |
+| basicException [0] outerExceptionThrownAtAssembly |string | |
+| basicException [0] outerExceptionThrownAtMethod |string | |
+| basicException [0] outerExceptionType |string | |
+| basicException [0] outerId |string | |
+| basicException [0] parsedStack [0] assembly |string | |
+| basicException [0] parsedStack [0] fileName |string | |
+| basicException [0] parsedStack [0] level |integer | |
+| basicException [0] parsedStack [0] line |integer | |
+| basicException [0] parsedStack [0] method |string | |
+| basicException [0] stack |string |Max length 10k |
+| basicException [0] typeName |string | |
+
+### Trace Messages
+Sent by [TrackTrace](./api-custom-events-metrics.md#tracktrace), and by the [logging adapters](./asp-net-trace-logs.md).
+
+| Path | Type | Notes |
+| --- | --- | --- |
+| message [0] loggerName |string | |
+| message [0] parameters |string | |
+| message [0] raw |string |The log message, max length 10k. |
+| message [0] severityLevel |string | |
+
+### Remote dependency
+Sent by TrackDependency. Used to report performance and usage of [calls to dependencies](./asp-net-dependencies.md) in the server, and AJAX calls in the browser.
+
+| Path | Type | Notes |
+| --- | --- | --- |
+| remoteDependency [0] async |boolean | |
+| remoteDependency [0] baseName |string | |
+| remoteDependency [0] commandName |string |For example "home/index" |
+| remoteDependency [0] count |integer |100/([sampling](./sampling.md) rate). For example 4 =&gt; 25%. |
+| remoteDependency [0] dependencyTypeName |string |HTTP, SQL, ... |
+| remoteDependency [0] durationMetric.value |number |Time from call to completion of response by dependency |
+| remoteDependency [0] `id` |string | |
+| remoteDependency [0] name |string |Url. Max length 250. |
+| remoteDependency [0] resultCode |string |from HTTP dependency |
+| remoteDependency [0] success |boolean | |
+| remoteDependency [0] type |string |Http, Sql,... |
+| remoteDependency [0] url |string |Max length 2000 |
+| remoteDependency [0] urlData.base |string |Max length 2000 |
+| remoteDependency [0] urlData.hashTag |string | |
+| remoteDependency [0] urlData.host |string |Max length 200 |
+
+### Requests
+Sent by [TrackRequest](./api-custom-events-metrics.md#trackrequest). The standard modules use this to report server response time, measured at the server.
+
+| Path | Type | Notes |
+| --- | --- | --- |
+| request [0] count |integer |100/([sampling](./sampling.md) rate). For example: 4 =&gt; 25%. |
+| request [0] durationMetric.value |number |Time from request arriving to response. 1e7 == 1s |
+| request [0] `id` |string |`Operation id` |
+| request [0] name |string |GET/POST + url base. Max length 250 |
+| request [0] responseCode |integer |HTTP response sent to client |
+| request [0] success |boolean |Default == (responseCode &lt; 400) |
+| request [0] url |string |Not including host |
+| request [0] urlData.base |string | |
+| request [0] urlData.hashTag |string | |
+| request [0] urlData.host |string | |
+
+### Page View Performance
+Sent by the browser. Measures the time to process a page, from the user initiating the request until the display is complete (excluding async AJAX calls). A sketch that breaks these timings down as shares of the total follows the table below.
+
+Context values show client OS and browser version.
+
+| Path | Type | Notes |
+| --- | --- | --- |
+| clientPerformance [0] clientProcess.value |integer |Time from end of receiving the HTML to displaying the page. |
+| clientPerformance [0] name |string | |
+| clientPerformance [0] networkConnection.value |integer |Time taken to establish a network connection. |
+| clientPerformance [0] receiveRequest.value |integer |Time from end of sending the request to receiving the HTML in reply. |
+| clientPerformance [0] sendRequest.value |integer |Time taken to send the HTTP request. |
+| clientPerformance [0] total.value |integer |Time from starting to send the request to displaying the page. |
+| clientPerformance [0] url |string |URL of this request |
+| clientPerformance [0] urlData.base |string | |
+| clientPerformance [0] urlData.hashTag |string | |
+| clientPerformance [0] urlData.host |string | |
+| clientPerformance [0] urlData.protocol |string | |
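+
+As a rough illustration only (the interface is an assumption mirroring the rows above), the phase timings can be expressed as shares of the total page-load time:
+
+```typescript
+// Sketch: express each page-load phase as a percentage of the total (assumed shape).
+interface ClientPerformance {
+  networkConnection: { value: number };
+  sendRequest: { value: number };
+  receiveRequest: { value: number };
+  clientProcess: { value: number };
+  total: { value: number }; // all values in tenths of a microsecond
+}
+
+function phaseShares(perf: ClientPerformance): Record<string, string> {
+  const phases: Record<string, number> = {
+    networkConnection: perf.networkConnection.value,
+    sendRequest: perf.sendRequest.value,
+    receiveRequest: perf.receiveRequest.value,
+    clientProcess: perf.clientProcess.value,
+  };
+  return Object.fromEntries(
+    Object.entries(phases).map(([name, v]) =>
+      [name, `${((v / perf.total.value) * 100).toFixed(1)}%`])
+  );
+}
+```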
+
+### Page Views
+Sent by trackPageView() or [stopTrackPage](./api-custom-events-metrics.md#page-views).
+
+| Path | Type | Notes |
+| --- | --- | --- |
+| view [0] count |integer |100/([sampling](./sampling.md) rate). For example 4 =&gt; 25%. |
+| view [0] durationMetric.value |integer |Value optionally set in trackPageView() or by startTrackPage() - stopTrackPage(). Not the same as clientPerformance values. |
+| view [0] name |string |Page title. Max length 250 |
+| view [0] url |string | |
+| view [0] urlData.base |string | |
+| view [0] urlData.hashTag |string | |
+| view [0] urlData.host |string | |
+
+### Availability
+Reports [availability web tests](./monitor-web-app-availability.md).
+
+| Path | Type | Notes |
+| --- | --- | --- |
+| availability [0] availabilityMetric.name |string |availability |
+| availability [0] availabilityMetric.value |number |1.0 or 0.0 |
+| availability [0] count |integer |100/([sampling](./sampling.md) rate). For example 4 =&gt; 25%. |
+| availability [0] dataSizeMetric.name |string | |
+| availability [0] dataSizeMetric.value |integer | |
+| availability [0] durationMetric.name |string | |
+| availability [0] durationMetric.value |number |Duration of test. 1e7==1s |
+| availability [0] message |string |Failure diagnostic |
+| availability [0] result |string |Pass/Fail |
+| availability [0] runLocation |string |Geo source of http req |
+| availability [0] testName |string | |
+| availability [0] testRunId |string | |
+| availability [0] testTimestamp |string | |
+
+### Metrics
+Generated by TrackMetric().
+
+The metric value is found in `context.custom.metrics[0]`.
+
+For example:
+
+```json
+{
+ "metric": [ ],
+ "context": {
+ ...
+ "custom": {
+ "dimensions": [
+ { "ProcessId": "4068" }
+ ],
+ "metrics": [
+ {
+ "dispatchRate": {
+ "value": 0.001295,
+ "count": 1.0,
+ "min": 0.001295,
+ "max": 0.001295,
+ "stdDev": 0.0,
+ "sampledValue": 0.001295,
+ "sum": 0.001295
+ }
+ }
+ ]
+ }
+ }
+}
+```
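+
+Because the dimensions arrive as an array of single-key objects and the metric is keyed by its name, a little reshaping makes them easier to work with. A minimal sketch, assuming the shape shown above:
+
+```typescript
+// Sketch: flatten custom dimensions and read the first custom metric (shape assumed from the example).
+type Dimensions = { [key: string]: string };
+type MetricEntry = { [name: string]: { value: number } };
+
+function readCustom(custom: { dimensions: Dimensions[]; metrics: MetricEntry[] }) {
+  // Merge [{ "ProcessId": "4068" }, ...] into one plain object.
+  const dimensions: Dimensions = Object.assign({}, ...custom.dimensions);
+  const first = custom.metrics[0];
+  const metricName = first ? Object.keys(first)[0] : undefined;
+  const metricValue = metricName ? first[metricName].value : undefined;
+  return { dimensions, metricName, metricValue }; // e.g. { ProcessId: "4068" }, "dispatchRate", 0.001295
+}
+```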
+
+### About metric values
+Metric values, both in metric reports and elsewhere, are reported with a standard object structure. For example:
+
+```json
+"durationMetric": {
+ "name": "contoso.org",
+ "type": "Aggregation",
+ "value": 468.71603053650279,
+ "count": 1.0,
+ "min": 468.71603053650279,
+ "max": 468.71603053650279,
+ "stdDev": 0.0,
+ "sampledValue": 468.71603053650279
+}
+```
+
+Currently, in all values reported from the standard SDK modules, `count==1` and only the `name` and `value` fields are useful, though this might change in the future. The only case where they would differ is if you write your own TrackMetric calls in which you set the other parameters.
+
+The purpose of the other fields is to allow metrics to be aggregated in the SDK, to reduce traffic to the portal. For example, you could average several successive readings before sending each metric report. Then you would calculate the min, max, standard deviation and aggregate value (sum or average) and set count to the number of readings represented by the report.
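+
+For illustration only, here's a minimal sketch of that kind of pre-aggregation; how you then pass the result to TrackMetric depends on your SDK and isn't shown:
+
+```typescript
+// Sketch: collapse several readings into one metric report (illustrative only).
+function aggregate(name: string, readings: number[]) {
+  const count = readings.length;
+  const value = readings.reduce((a, b) => a + b, 0) / count; // average as the aggregate value
+  const min = Math.min(...readings);
+  const max = Math.max(...readings);
+  const stdDev = Math.sqrt(
+    readings.reduce((a, r) => a + (r - value) ** 2, 0) / count);
+  return { name, value, count, min, max, stdDev };           // one report instead of `count` reports
+}
+
+// For example, ten successive readings become a single report:
+const report = aggregate("queueLength", [12, 14, 11, 15, 13, 12, 16, 14, 13, 12]);
+```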
+
+In the tables above, we have omitted the rarely used fields count, min, max, stdDev, and sampledValue.
+
+Instead of pre-aggregating metrics, you can use [sampling](./sampling.md) if you need to reduce the volume of telemetry.
+
+#### Durations
+Except where otherwise noted, durations are represented in tenths of a microsecond, so that 10000000.0 means 1 second.
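+
+For example, converting the raw values is just a division (the helper names are illustrative):
+
+```typescript
+// 10,000 ticks == 1 ms; 10,000,000 ticks == 1 second.
+const ticksToMilliseconds = (ticks: number): number => ticks / 10_000;
+const ticksToSeconds = (ticks: number): number => ticks / 10_000_000;
+
+ticksToSeconds(10000000.0);      // 1 second
+ticksToMilliseconds(1046804.0);  // ~104.7 ms (the durationMetric.value in the example record above)
+```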
+
+## See also
+* [Application Insights](./app-insights-overview.md)
+* [Continuous Export](export-telemetry.md)
+* [Code samples](export-telemetry.md#code-samples)
+ <!--Link references--> [exportasa]: ../../stream-analytics/app-insights-export-sql-stream-analytics.md
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
Most configuration fields are named so that they can default to false. All field
| disableFlush&#8203;OnBeforeUnload | If true, flush method won't be called when `onBeforeUnload` event triggers. | boolean<br/> false | | enableSessionStorageBuffer | If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load. | boolean<br />true | | cookieCfg | Defaults to cookie usage enabled. For full defaults, see [ICookieCfgConfig](#icookiemgrconfig) settings. | [ICookieCfgConfig](#icookiemgrconfig)<br>(Since 2.6.0)<br/>undefined |
-| ~~isCookieUseDisabled~~<br>disableCookiesUsage | If true, the SDK won't store or read any data from cookies. Disables the User and Session cookies and renders the usage blades and experiences useless. `isCookieUseDisable` is deprecated in favor of `disableCookiesUsage`. When both are provided, `disableCookiesUsage` takes precedence.<br>(Since v2.6.0) And if `cookieCfg.enabled` is also defined, it will take precedence over these values. Cookie usage can be re-enabled after initialization via `core.getCookieMgr().setEnabled(true)`. | Alias for [`cookieCfg.enabled`](#icookiemgrconfig)<br>false |
+| ~~isCookieUseDisabled~~<br>disableCookiesUsage | If true, the SDK won't store or read any data from cookies. Disables the User and Session cookies and renders the usage panes and experiences useless. `isCookieUseDisabled` is deprecated in favor of `disableCookiesUsage`. When both are provided, `disableCookiesUsage` takes precedence.<br>(Since v2.6.0) If `cookieCfg.enabled` is also defined, it takes precedence over these values. Cookie usage can be re-enabled after initialization via `core.getCookieMgr().setEnabled(true)`. A configuration sketch follows this table. | Alias for [`cookieCfg.enabled`](#icookiemgrconfig)<br>false |
| cookieDomain | Custom cookie domain. This option is helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined, it will take precedence over this value. | Alias for [`cookieCfg.domain`](#icookiemgrconfig)<br>null | | cookiePath | Custom cookie path. This option is helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined, it will take precedence over this value. | Alias for [`cookieCfg.path`](#icookiemgrconfig)<br>(Since 2.6.0)<br/>null | | isRetryDisabled | If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected). | boolean<br/>false |
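+
+As a sketch of how these cookie settings fit together when you initialize the SDK (the package import and connection string are placeholders; `cookieCfg` and `core.getCookieMgr()` are the items described in the table above):
+
+```typescript
+import { ApplicationInsights } from "@microsoft/applicationinsights-web";
+
+// Sketch: start with cookies disabled via cookieCfg, which takes precedence over the aliases above.
+const appInsights = new ApplicationInsights({
+  config: {
+    connectionString: "<your-connection-string>", // placeholder
+    cookieCfg: { enabled: false },                // same effect as disableCookiesUsage: true
+  },
+});
+appInsights.loadAppInsights();
+
+// Later, cookie usage can be re-enabled without re-initializing:
+appInsights.core.getCookieMgr().setEnabled(true);
+```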
azure-monitor Platforms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/platforms.md
- Title: 'Application Insights: Languages, platforms, and integrations | Microsoft Docs'
-description: Languages, platforms, and integrations that are available for Application Insights.
- Previously updated : 11/15/2022---
-# Supported languages
-
-* [C#|VB (.NET)](./asp-net.md)
-* [Java](./java-in-process-agent.md)
-* [JavaScript](./javascript.md)
-* [Node.js](./nodejs.md)
-* [Python](./opencensus-python.md)
-
-## Supported platforms and frameworks
-
-Supported platforms and frameworks are listed here.
-
-### Azure service integration (portal enablement, Azure Resource Manager deployments)
-* [Azure Virtual Machines and Azure Virtual Machine Scale Sets](./azure-vm-vmss-apps.md)
-* [Azure App Service](./azure-web-apps.md)
-* [Azure Functions](../../azure-functions/functions-monitoring.md)
-* [Azure Cloud Services](./azure-web-apps-net-core.md), including both web and worker roles
-
-### Auto-instrumentation (enable without code changes)
-* [ASP.NET - for web apps hosted with IIS](./status-monitor-v2-overview.md)
-* [ASP.NET Core - for web apps hosted with IIS](./status-monitor-v2-overview.md)
-* [Java](./java-in-process-agent.md)
-
-### Manual instrumentation / SDK (some code changes required)
-* [ASP.NET](./asp-net.md)
-* [ASP.NET Core](./asp-net-core.md)
-* [Node.js](./nodejs.md)
-* [Python](./opencensus-python.md)
-* [JavaScript - web](./javascript.md)
- * [React](./javascript-react-plugin.md)
- * [React Native](./javascript-react-native-plugin.md)
- * [Angular](./javascript-angular-plugin.md)
-* [Windows desktop applications, services, and worker roles](./windows-desktop.md)
-* [Universal Windows app](../app/mobile-center-quickstart.md) (App Center)
-* [Android](../app/mobile-center-quickstart.md) (App Center)
-* [iOS](../app/mobile-center-quickstart.md) (App Center)
-
-> [!NOTE]
-> OpenTelemetry-based instrumentation is available in preview for [C#, Node.js, and Python](opentelemetry-enable.md). Review the limitations noted at the beginning of each language's official documentation. If you require a full-feature experience, use the existing Application Insights SDKs.
-
-## Logging frameworks
-* [ILogger](./ilogger.md)
-* [Log4Net, NLog, or System.Diagnostics.Trace](./asp-net-trace-logs.md)
-* [Log4J, Logback, or java.util.logging](./java-in-process-agent.md#autocollected-logs)
-* [LogStash plug-in](https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash/logstash-output-applicationinsights)
-* [Azure Monitor](/archive/blogs/msoms/application-insights-connector-in-oms)
-
-## Export and data analysis
-* [Power BI](https://powerbi.microsoft.com/blog/explore-your-application-insights-data-with-power-bi/)
-* [Power BI for workspace-based resources](../logs/log-powerbi.md)
-
-## Unsupported SDKs
-Several other community-supported Application Insights SDKs exist. However, Azure Monitor only provides support when you use the supported instrumentation options listed on this page. We're constantly assessing opportunities to expand our support for other languages. Follow [Azure Updates for Application Insights](https://azure.microsoft.com/updates/?query=application%20insights) for the latest SDK news.
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md
Which features of your web or mobile app are most popular? Do your users achieve
The best experience is obtained by installing Application Insights both in your app server code and in your webpages. The client and server components of your app send telemetry back to the Azure portal for analysis.
-1. **Server code:** Install the appropriate module for your [ASP.NET](./asp-net.md), [Azure](./app-insights-overview.md), [Java](./java-in-process-agent.md), [Node.js](./nodejs.md), or [other](./platforms.md) app.
+1. **Server code:** Install the appropriate module for your [ASP.NET](./asp-net.md), [Azure](./app-insights-overview.md), [Java](./java-in-process-agent.md), [Node.js](./nodejs.md), or [other](./app-insights-overview.md#supported-languages) app.
* If you don't want to install server code, [create an Application Insights resource](./create-new-resource.md).
azure-monitor Azure Monitor Operations Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-operations-manager.md
There are certain scenarios though where you may need to continue using Operatio
- [Availability tests](app/monitor-web-app-availability.md), which allow you to monitor and alert on the availability and responsiveness of your applications require incoming requests from the IP addresses of web test agents. If your policy won't allow such access, you may need to keep using [Web Application Availability Monitors](/system-center/scom/web-application-availability-monitoring-template) in Operations Manager. - In Operations Manager you can set any polling interval for availability tests, with many customers checking every 60-120 seconds. Application Insights has a minimum polling interval of 5 minutes which may be too long for some customers. - A significant amount of monitoring in Operations Manager is performed by collecting events generated by applications and by running scripts on the local agent. These aren't standard options in Application Insights, so you could require custom work to achieve your business requirements. This might include custom alert rules using event data stored in a Log Analytics workspace and scripts launched in a virtual machines guest using [hybrid runbook worker](../automation/automation-hybrid-runbook-worker.md).-- Depending on the language that your application is written in, you may be limited in the [instrumentation you can use with Application Insights](app/platforms.md).
+- Depending on the language that your application is written in, you may be limited in the [instrumentation you can use with Application Insights](app/app-insights-overview.md#supported-languages).
Following the basic strategy in the other sections of this guide, continue to use Operations Manager for your business applications, but take advantage of additional features provided by Application Insights. As you're able to replace critical functionality with Azure Monitor, you can start to retire your custom management packs.
azure-monitor Best Practices Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md
To enable monitoring for an application, you must decide whether you'll use code
- [Java](app/java-in-process-agent.md) - [Node.js](app/nodejs.md) - [Python](app/opencensus-python.md)-- [Other platforms](app/platforms.md)
+- [Other platforms](app/app-insights-overview.md#supported-languages)
### Configure availability testing
azure-monitor Ad Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/ad-assessment.md
View the summarized compliance assessments for your infrastructure and then dril
1. On the **Overview** page, click the **Active Directory Health Check** tile.
-2. On the **Health Check** page, review the summary information in one of the focus area panes and then click one to view recommendations for that focus area.
+2. On the **Health Check** page, review the summary information in one of the focus area sections and then click one to view recommendations for that focus area.
3. On any of the focus area pages, you can view the prioritized recommendations made for your environment. Click a recommendation under **Affected Objects** to view details about why the recommendation is made.
azure-monitor Capacity Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/capacity-performance.md
Click on the Capacity and Performance tile to open the Capacity and Performance
- **Host Density** The top tile shows the total number of hosts and virtual machines available to the solution. Click the top tile to view additional details in log search. Also lists all hosts and the number of virtual machines that are hosted. Click a host to drill into the VM results in a log search.
-![dashboard Hosts blade](./media/capacity-performance/dashboard-hosts.png)
+![dashboard Hosts columns](./media/capacity-performance/dashboard-hosts.png)
-![dashboard virtual machines blade](./media/capacity-performance/dashboard-vms.png)
+![dashboard virtual machines columns](./media/capacity-performance/dashboard-vms.png)
### Evaluate performance
azure-monitor Dns Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/dns-analytics.md
From the Log Analytics workspace in the Azure portal, select **Workspace summary
You can modify the list to add any domain name suffix that you want to view lookup insights for. You can also remove any domain name suffix that you don't want to view lookup insights for. -- **Talkative Client Threshold**. DNS clients that exceed the threshold for the number of lookup requests are highlighted in the **DNS Clients** blade. The default threshold is 1,000. You can edit the threshold.
+- **Talkative Client Threshold**. DNS clients that exceed the threshold for the number of lookup requests are highlighted in the **DNS Clients** pane. The default threshold is 1,000. You can edit the threshold.
![Allowlisted domain names](./media/dns-analytics/dns-config.png)
The solution dashboard shows summarized information for the various features of
![Time selection control](./media/dns-analytics/dns-time.png)
-The solution dashboard shows the following blades:
+The solution dashboard shows the following sections:
**DNS Security**. Reports the DNS clients that are trying to communicate with malicious domains. By using Microsoft threat intelligence feeds, DNS Analytics can detect client IPs that are trying to access malicious domains. In many cases, malware-infected devices "dial out" to the "command and control" center of the malicious domain by resolving the malware domain name.
-![DNS Security blade](./media/dns-analytics/dns-security-blade.png)
+![DNS Security section](./media/dns-analytics/dns-security-blade.png)
When you click a client IP in the list, Log Search opens and shows the lookup details of the respective query. In the following example, DNS Analytics detected that the communication was done with an [IRCbot](https://www.microsoft.com/en-us/wdsi/threats/malware-encyclopedia-description?Name=Backdoor:Win32/IRCbot&threatId=2621):
The information helps you to identify the:
**Domains Queried**. Provides the most frequent domain names being queried by the DNS clients in your environment. You can view the list of all the domain names queried. You can also drill down into the lookup request details of a specific domain name in Log Search.
-![Domains Queried blade](./media/dns-analytics/domains-queried-blade.png)
+![Domains Queried section](./media/dns-analytics/domains-queried-blade.png)
**DNS Clients**. Reports the clients *breaching the threshold* for number of queries in the chosen time period. You can view the list of all the DNS clients and the details of the queries made by them in Log Search.
-![DNS Clients blade](./media/dns-analytics/dns-clients-blade.png)
+![DNS Clients section](./media/dns-analytics/dns-clients-blade.png)
**Dynamic DNS Registrations**. Reports name registration failures. All registration failures for address [resource records](https://en.wikipedia.org/wiki/List_of_DNS_record_types) (Type A and AAAA) are highlighted along with the client IPs that made the registration requests. You can then use this information to find the root cause of the registration failure by following these steps:
The information helps you to identify the:
1. Check whether the zone is configured for secure dynamic update or not.
- ![Dynamic DNS Registrations blade](./media/dns-analytics/dynamic-dns-reg-blade.png)
+ ![Dynamic DNS Registrations section](./media/dns-analytics/dynamic-dns-reg-blade.png)
**Name registration requests**. The upper tile shows a trendline of successful and failed DNS dynamic update requests. The lower tile lists the top 10 clients that are sending failed DNS update requests to the DNS servers, sorted by the number of failures.
-![Name registration requests blade](./media/dns-analytics/name-reg-req-blade.png)
+![Name registration requests section](./media/dns-analytics/name-reg-req-blade.png)
**Sample DDI Analytics Queries**. Contains a list of the most common search queries that fetch raw analytics data directly.
azure-monitor Scom Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/scom-assessment.md
View the summarized compliance assessments for your infrastructure and then dril
2. In the Azure portal, click **More services** found on the lower left-hand corner. In the list of resources, type **Log Analytics**. As you begin typing, the list filters based on your input. Select **Log Analytics**. 3. In the Log Analytics subscriptions pane, select a workspace and then click the **Workspace summary** menu item. 4. On the **Overview** page, click the **System Center Operations Manager Health Check** tile.
-5. On the **System Center Operations Manager Health Check** page, review the summary information in one of the focus area blades and then click one to view recommendations for that focus area.
+5. On the **System Center Operations Manager Health Check** page, review the summary information in one of the focus area sections and then click one to view recommendations for that focus area.
6. On any of the focus area pages, you can view the prioritized recommendations made for your environment. Click a recommendation under **Affected Objects** to view details about why the recommendation is made.<br><br> ![focus area](./media/scom-assessment/log-analytics-scom-healthcheck-dashboard-02.png)<br> 7. You can take corrective actions suggested in **Suggested Actions**. When the item has been addressed, later assessments will record that recommended actions were taken and your compliance score will increase. Corrected items appear as **Passed Objects**.
azure-monitor Sql Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/sql-assessment.md
View the summarized compliance assessments for your infrastructure and then dril
2. In the Azure portal, click **More services** found on the lower left-hand corner. In the list of resources, type **Monitor**. As you begin typing, the list filters based on your input. Select **Monitor**. 3. In the **Insights** section of the menu, select **More**. 4. On the **Overview** page, click the **SQL Health Check** tile.
-5. On the **Health Check** page, review the summary information in one of the focus area blades and then click one to view recommendations for that focus area.
+5. On the **Health Check** page, review the summary information in one of the focus area sections and then click one to view recommendations for that focus area.
6. On any of the focus area pages, you can view the prioritized recommendations made for your environment. Click a recommendation under **Affected Objects** to view details about why the recommendation is made.<br><br> ![image of SQL Health Check recommendations](./media/sql-assessment/sql-healthcheck-dashboard-02.png)<br> 7. You can take corrective actions suggested in **Suggested Actions**. When the item has been addressed, later assessments will record that recommended actions were taken and your compliance score will increase. Corrected items appear as **Passed Objects**.
azure-monitor Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/vmware.md
The VMware tile appears in your Log Analytics workspace. It provides a high-leve
![Screenshot shows the VMware tile, displaying nine failures.](./media/vmware/tile.png) #### Navigate the dashboard view
-In the **VMware** dashboard view, blades are organized by:
+In the **VMware** dashboard view, sections are organized by:
* Failure Status Count * Top Host by Event Counts
In the **VMware** dashboard view, blades are organized by:
![solution2](./media/vmware/solutionview1-2.png)
-Click any blade to open Log Analytics search pane that shows detailed information specific for the blade.
+Click any section to open the Log Analytics search pane that shows detailed information specific to that section.
From here, you can edit the log query to modify it for something specific. For details on creating log queries, see [Find data using log queries in Azure Monitor](../logs/log-query-overview.md).
You can drill further by clicking an ESXi host or an event type.
When you click an ESXi host name, you view information from that ESXi host. If you want to narrow results with the event type, add `"ProcessName_s=EVENT TYPE"` in your search query. You can select **ProcessName** in the search filter. That narrows the information for you.
-![Screenshot of the ESXi Host Per Event Count and Breakdown Per Event Type blades in the VMware Monitoring dashboard view.](./media/vmware/eventhostdrilldown.png)
+![Screenshot of the ESXi Host Per Event Count and Breakdown Per Event Type sections in the VMware Monitoring dashboard view.](./media/vmware/eventhostdrilldown.png)
#### Find high VM activities A virtual machine can be created and deleted on any ESXi host. It's helpful for an administrator to identify how many VMs an ESXi host creates. That in-turn, helps to understand performance and capacity planning. Keeping track of VM activity events is crucial when managing your environment.
-![Screenshot of the Virtual Machine Activities blade in the VMware Monitoring dashboard, showing a graph of VM creation and deletion by the ESXi host.](./media/vmware/vmactivities1.png)
+![Screenshot of the Virtual Machine Activities section in the VMware Monitoring dashboard, showing a graph of VM creation and deletion by the ESXi host.](./media/vmware/vmactivities1.png)
If you want to see additional ESXi host VM creation data, click an ESXi host name.
azure-monitor Wire Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/wire-data.md
rpm -e dependency-agent dependency-agent-connector
## Using the Wire Data 2.0 solution
-In the **Overview** page for your Log Analytics workspace in the Azure portal, click the **Wire Data 2.0** tile to open the Wire Data dashboard. The dashboard includes the blades in the following table. Each blade lists up to 10 items matching that blade's criteria for the specified scope and time range. You can run a log search that returns all records by clicking **See all** at the bottom of the blade or by clicking the blade header.
+In the **Overview** page for your Log Analytics workspace in the Azure portal, click the **Wire Data 2.0** tile to open the Wire Data dashboard. The dashboard includes the sections in the following table. Each section lists up to 10 items matching that section's criteria for the specified scope and time range. You can run a log search that returns all records by clicking **See all** at the bottom of the section or by clicking the section header.
-| **Blade** | **Description** |
+| **Section** | **Description** |
| | | | Agents capturing network traffic | Shows the number of agents that are capturing network traffic and lists the top 10 computers that are capturing traffic. Click the number to run a log search for <code>WireData \| summarize sum(TotalBytes) by Computer \| take 500000</code>. Click a computer in the list to run a log search returning the total number of bytes captured. | | Local Subnets | Shows the number of local subnets that agents have discovered. Click the number to run a log search for <code>WireData \| summarize sum(TotalBytes) by LocalSubnet</code> that lists all subnets with the number of bytes sent over each one. Click a subnet in the list to run a log search returning the total number of bytes sent over the subnet. |
In the **Overview** page for your Log Analytics workspace in the Azure portal, c
![Wire Data dashboard](./media/wire-data/wire-data-dash.png)
-You can use the **Agents capturing network traffic** blade to determine how much network bandwidth is being consumed by computers. This blade can help you easily find the _chattiest_ computer in your environment. Such computers could be overloaded, acting abnormally, or using more network resources than normal.
+You can use the **Agents capturing network traffic** section to determine how much network bandwidth is being consumed by computers. This section can help you easily find the _chattiest_ computer in your environment. Such computers could be overloaded, acting abnormally, or using more network resources than normal.
-![Screenshot of the Agents capturing network traffic blade in the Wire Data 2.0 dashboard showing the network bandwidth consumed by each computer.](./media/wire-data/log-search-example01.png)
+![Screenshot of the Agents capturing network traffic section in the Wire Data 2.0 dashboard showing the network bandwidth consumed by each computer.](./media/wire-data/log-search-example01.png)
-Similarly, you can use the **Local Subnets** blade to determine how much network traffic is moving through your subnets. Users often define subnets around critical areas for their applications. This blade offers a view into those areas.
+Similarly, you can use the **Local Subnets** section to determine how much network traffic is moving through your subnets. Users often define subnets around critical areas for their applications. This section offers a view into those areas.
-![Screenshot of the Local Subnets blade in the Wire Data 2.0 dashboard showing the network bandwidth consumed by each LocalSubnet.](./media/wire-data/log-search-example02.png)
+![Screenshot of the Local Subnets section in the Wire Data 2.0 dashboard showing the network bandwidth consumed by each LocalSubnet.](./media/wire-data/log-search-example02.png)
-The **Application-level Protocols** blade is useful because it's helpful know what protocols are in use. For example, you might expect SSH to not be in use in your network environment. Viewing information available in the blade can quickly confirm or disprove your expectation.
+The **Application-level Protocols** section is useful because it's helpful to know which protocols are in use. For example, you might expect SSH not to be in use in your network environment. Viewing the information available in the section can quickly confirm or disprove your expectation.
-![Screenshot of the Application-level Protocols blade in the Wire Data 2.0 dashboard showing the network bandwidth consumed by each protocol.](./media/wire-data/log-search-example03.png)
+![Screenshot of the Application-level Protocols section in the Wire Data 2.0 dashboard showing the network bandwidth consumed by each protocol.](./media/wire-data/log-search-example03.png)
It's also useful to know if protocol traffic is increasing or decreasing over time. For example, if the amount of data being transmitted by an application is increasing, that might be something you should be aware of, or that you might find noteworthy.
azure-monitor App Insights Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/app-insights-connector.md
This solution does not install any management packs in connected management grou
## Use the solution
-The following sections describe how you can use the panes shown in the Application Insights dashboard to view and interact with data from your apps.
+The following sections describe how you can use the sections shown in the Application Insights dashboard to view and interact with data from your apps.
### View Application Insights Connector information
-Click the **Application Insights** tile to open the **Application Insights** dashboard to see the following panes.
+Click the **Application Insights** tile to open the **Application Insights** dashboard to see the following sections.
-![Screenshot of the Application Insights dashboard showing the panes for Applications, Data Volume, and Availability.](./media/app-insights-connector/app-insights-dash01.png)
+![Screenshot of the Application Insights dashboard showing the sections for Applications, Data Volume, and Availability.](./media/app-insights-connector/app-insights-dash01.png)
-![Screenshot of the Application Insights dashboard showing the panes for Server Requests, Failures, and Exceptions.](./media/app-insights-connector/app-insights-dash02.png)
+![Screenshot of the Application Insights dashboard showing the sections for Server Requests, Failures, and Exceptions.](./media/app-insights-connector/app-insights-dash02.png)
-The dashboard includes the panes shown in the table. Each pane lists up to 10 items matching that pane's criteria for the specified scope and time range. You can run a log search that returns all records when you click **See all** at the bottom of the pane or when you click the pane header.
+The dashboard includes the sections shown in the table. Each section lists up to 10 items matching that section's criteria for the specified scope and time range. You can run a log search that returns all records when you click **See all** at the bottom of the section or when you click the section header.
| **Column** | **Description** |
The dashboard includes the panes shown in the table. Each pane lists up to 10 it
When you click any item in the dashboard, you see an Application Insights perspective shown in search. The perspective provides an extended visualization, based on the telemetry type that you selected. So, the visualization content changes for different telemetry types.
-When you click anywhere in the Applications pane, you see the default **Applications** perspective.
+When you click anywhere in the Applications section, you see the default **Applications** perspective.
![Application Insights Applications perspective](./media/app-insights-connector/applications-blade-drill-search.png) The perspective shows an overview of the application that you selected.
-The **Availability** pane shows a different perspective view where you can see web test results and related failed requests.
+The **Availability** section shows a different perspective view where you can see web test results and related failed requests.
![Application Insights Availability perspective](./media/app-insights-connector/availability-blade-drill-search.png)
-When you click anywhere in the **Server Requests** or **Failures** panes, the perspective components change to give you a visualization that related to requests.
+When you click anywhere in the **Server Requests** or **Failures** sections, the perspective components change to give you a visualization related to requests.
-![Application Insights Failures pane](./media/app-insights-connector/server-requests-failures-drill-search.png)
+![Application Insights Failures section](./media/app-insights-connector/server-requests-failures-drill-search.png)
-When you click anywhere in the **Exceptions** pane, you see a visualization that's tailored to exceptions.
+When you click anywhere in the **Exceptions** section, you see a visualization that's tailored to exceptions.
-![Application Insights Exceptions pane](./media/app-insights-connector/exceptions-blade-drill-search.png)
+![Application Insights Exceptions section](./media/app-insights-connector/exceptions-blade-drill-search.png)
Regardless of whether you click something on the **Application Insights Connector** dashboard or within the **Search** page itself, any query returning Application Insights data shows the Application Insights perspective. For example, if you are viewing Application Insights data, a **&#42;** query also shows the perspective tab like the following image:
Perspective components are updated depending on the search query. This means tha
### Pivot to an app in the Azure portal
-Application Insights Connector panes are designed to enable you to pivot to the selected Application Insights app *when you use the Azure portal*. You can use the solution as a high-level monitoring platform that helps you troubleshoot an app. When you see a potential problem in any of your connected applications, you can either drill into it in Log Analytics search or you can pivot directly to the Application Insights app.
+Application Insights Connector sections are designed to enable you to pivot to the selected Application Insights app *when you use the Azure portal*. You can use the solution as a high-level monitoring platform that helps you troubleshoot an app. When you see a potential problem in any of your connected applications, you can either drill into it in Log Analytics search or you can pivot directly to the Application Insights app.
To pivot, click the ellipses (**…**) that appears at the end of each line, and select **Open in Application Insights**.
azure-monitor Profiler Aspnetcore Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-aspnetcore-linux.md
In this guide, you'll:
You can add Application Insights to your web app either via: -- The Enablement blade in the Azure portal,-- The Configuration blade in the Azure portal, or
+- The Application Insights pane in the Azure portal,
+- The Configuration pane in the Azure portal, or
- Manually adding to your web app settings.
-# [Enablement blade](#tab/enablement)
+# [Application Insights pane](#tab/enablement)
1. In your web app on the Azure portal, select **Application Insights** in the left side menu. 1. Click **Turn on Application Insights**.
You can add Application Insights to your web app either via:
1. Click **Apply** > **Yes** to apply and confirm.
-# [Configuration blade](#tab/config)
+# [Configuration pane](#tab/config)
1. [Create an Application Insights resource](../app/create-workspace-resource.md) in the same Azure subscription as your App Service. 1. Navigate to the Application Insights resource.
You can add Application Insights to your web app either via:
1. In your web app on the Azure portal, select **Configuration** in the left side menu. 1. Click **New application setting**.
- :::image type="content" source="./media/profiler-aspnetcore-linux/new-setting-configuration.png" alt-text="Screenshot of adding new application setting in the configuration blade.":::
+ :::image type="content" source="./media/profiler-aspnetcore-linux/new-setting-configuration.png" alt-text="Screenshot of adding new application setting in the configuration pane.":::
1. Add the following settings in the **Add/Edit application setting** pane, using your saved iKey:
azure-monitor Profiler Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-azure-functions.md
From your Functions app overview page in the Azure portal:
1. Click **Save** in the top menu, then **Continue**.
- :::image type="content" source="./media/profiler-azure-functions/save-button.png" alt-text="Screenshot outlining the save button in the top menu of the configuration blade.":::
+ :::image type="content" source="./media/profiler-azure-functions/save-button.png" alt-text="Screenshot outlining the save button in the top menu of the configuration pane.":::
:::image type="content" source="./media/profiler-azure-functions/continue-button.png" alt-text="Screenshot outlining the continue button in the dialog after saving."::: The app settings now show up in the table:
- :::image type="content" source="./media/profiler-azure-functions/app-settings-table.png" alt-text="Screenshot showing the two new app settings in the table on the configuration blade.":::
+ :::image type="content" source="./media/profiler-azure-functions/app-settings-table.png" alt-text="Screenshot showing the two new app settings in the table on the configuration pane.":::
> [!NOTE]
azure-monitor Profiler Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-containers.md
Service Profiler session finished. # A profiling session is complet
## View the Service Profiler traces 1. Wait for 2-5 minutes so the events can be aggregated to Application Insights.
-1. Open the **Performance** blade in your Application Insights resource.
+1. Open the **Performance** pane in your Application Insights resource.
1. Once the trace process is complete, you'll see the Profiler Traces button as shown below:
- :::image type="content" source="./media/profiler-containerinstances/profiler-traces.png" alt-text="Screenshot of Profile traces in the performance blade.":::
+ :::image type="content" source="./media/profiler-containerinstances/profiler-traces.png" alt-text="Screenshot of Profile traces in the performance pane.":::
azure-monitor Snapshot Debugger Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-upgrade.md
If you enabled the Snapshot debugger using the site extension, you can upgrade u
:::image type="content" source="./media/snapshot-debugger-upgrade/app-service-resource.png" alt-text="Screenshot of individual App Service resource named DiagService01.":::
-1. After you've navigated to your resource, click on the **Extensions** blade and wait for the list of extensions to populate:
+1. After you've navigated to your resource, click on the **Extensions** pane and wait for the list of extensions to populate:
:::image type="content" source="./media/snapshot-debugger-upgrade/application-insights-site-extension-to-be-deleted.png" alt-text="Screenshot of App Service Extensions showing Application Insights extension for Azure App Service installed.":::
If you enabled the Snapshot debugger using the site extension, you can upgrade u
:::image type="content" source="./media/snapshot-debugger-upgrade/application-insights-site-extension-delete.png" alt-text="Screenshot of App Service Extensions showing Application Insights extension for Azure App Service with the Delete button highlighted.":::
-1. Go to the **Overview** blade of your resource and select **Application Insights**:
+1. Go to the **Overview** pane of your resource and select **Application Insights**:
:::image type="content" source="./media/snapshot-debugger-upgrade/application-insights-button.png" alt-text="Screenshot of three buttons. Center button with name Application Insights is selected.":::
-1. If this is the first time you've viewed the Application Insights blade for this App Service, you'll be prompted to turn on Application Insights. Select **Turn on Application Insights**.
+1. If this is the first time you've viewed the Application Insights pane for this App Service, you'll be prompted to turn on Application Insights. Select **Turn on Application Insights**.
- :::image type="content" source="./media/snapshot-debugger-upgrade/turn-on-application-insights.png" alt-text="Screenshot of the first-time experience for the Application Insights blade with the Turn on Application Insights button highlighted.":::
+ :::image type="content" source="./media/snapshot-debugger-upgrade/turn-on-application-insights.png" alt-text="Screenshot of the first-time experience for the Application Insights pane with the Turn on Application Insights button highlighted.":::
-1. In the Application Insights settings blade, switch the Snapshot Debugger setting toggles to **On** and select **Apply**.
+1. In the Application Insights settings pane, switch the Snapshot Debugger setting toggles to **On** and select **Apply**.
- If you decide to change *any* Application Insights settings, the **Apply** button on the bottom of the blade will be activated.
+ If you decide to change *any* Application Insights settings, the **Apply** button on the bottom of the pane will be activated.
:::image type="content" source="./media/snapshot-debugger-upgrade/view-application-insights-data.png" alt-text="Screenshot of Application Insights App Service Configuration page with Apply button highlighted in red.":::
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger.md
When an exception occurs, you can automatically collect a debug snapshot from yo
Simply include the [Snapshot collector NuGet package](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector) in your application and configure collection parameters in [`ApplicationInsights.config`](../app/configuration-with-applicationinsights-config.md).
-Snapshots appear on [**Exceptions**](../app/asp-net-exceptions.md) in the Application Insights blade of the Azure portal.
+Snapshots appear on [**Exceptions**](../app/asp-net-exceptions.md) in the Application Insights pane of the Azure portal.
You can view debug snapshots in the portal to see the call stack and inspect variables at each call stack frame. To get a more powerful debugging experience with source code, open snapshots with Visual Studio Enterprise. You can also [set SnapPoints to interactively take snapshots](/visualstudio/debugger/debug-live-azure-applications) without waiting for an exception.
azure-monitor Workbooks Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-configurations.md
There are several ways that you can create interactive reports and experiences i
- **Parameters**: When you update a [parameter](workbooks-parameters.md), any control that uses the parameter automatically refreshes and redraws to reflect the new value. This behavior is how most of the Azure portal reports support interactivity. Workbooks provide this functionality in a straightforward manner with minimal user effort. - **Grid, tile, and chart selections**: You can construct scenarios where selecting a row in a grid updates subsequent charts based on the content of the row. For example, you might have a grid that shows a list of requests and some statistics like failure counts. You can set it up so that if you select the row of a request, the detailed charts below update to show only that request. Learn how to [set up a grid row click](#set-up-a-grid-row-click).
+ - **Grid cell clicks**: You can add interactivity with a special type of grid column renderer called a [link renderer](#link-renderer-actions). A link renderer converts a grid cell into a hyperlink based on the contents of the cell. Workbooks support many kinds of link renderers including renderers that open resource overview panes, property bag viewers, and Application Insights search, usage, and transaction tracing. Learn how to [set up a grid cell click](#set-up-grid-cell-clicks).
- **Conditional visibility**: You can make controls appear or disappear based on the values of parameters. This way you can have reports that look different based on user input or telemetry state. For example, you can show consumers a summary when there are no issues. You can also show detailed information when there's something wrong. Learn how to [set up conditional visibility](#set-conditional-visibility). - **Export parameters with multi-selections**: You can export parameters from query and metrics workbook components when a row or multiple rows are selected. Learn how to [set up multi-selects in grids and charts](#set-up-multi-selects-in-grids-and-charts).
azure-monitor Workbooks Link Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-link-actions.md
When you use the link renderer, the following settings are available:
|View to open| Allows you to select one of the actions enumerated above. | |Menu item| If **Resource Overview** is selected, this menu item is in the resource's overview. You can use it to open alerts or activity logs instead of the "overview" for the resource. Menu item values are different for each Azure Resource type.| |Link label| If specified, this value appears in the grid column. If this value isn't specified, the value of the cell appears. If you want another value to appear, like a heatmap or icon, don't use the link renderer. Instead, use the appropriate renderer and select the **Make this item a link** option. |
-|Open link in Context Blade| If specified, the link is opened as a pop-up "context" view on the right side of the window instead of opening as a full view. |
+|Open link in Context pane| If specified, the link is opened as a pop-up "context" view on the right side of the window instead of opening as a full view. |
When you use the **Make this item a link** option, the following settings are available:
When you use the **Make this item a link** option, the following settings are av
|Link value comes from| When a cell is displayed as a renderer with a link, this field specifies where the "link" value to be used in the link comes from. You can select from a dropdown of the other columns in the grid. For example, the cell might be a heatmap value. But perhaps you want the link to open the **Resource Overview** for the resource ID in the row. In that case, you would set the link value to come from the **Resource ID** field. |View to open| Same as above. | |Menu item| Same as above. |
-|Open link in Context Blade| Same as above. |
+|Open link in Context pane| Same as above. |
## Azure Resource Manager deployment link settings
This section defines where the template should come from and the parameters used
|:- |:-| |Resource group id comes from| The resource ID is used to manage deployed resources. The subscription is used to manage deployed resources and costs. The resource groups are used like folders to organize and manage all your resources. If this value isn't specified, the deployment will fail. Select from **Cell**, **Column**, **Parameter**, and **Static Value** in [Link sources](#link-sources).| |ARM template URI from| The URI to the ARM template itself. The template URI needs to be accessible to the users who will deploy the template. Select from **Cell**, **Column**, **Parameter**, and **Static Value** in [Link sources](#link-sources). For more information, see [Azure quickstart templates](https://azure.microsoft.com/resources/templates/).|
-|ARM Template Parameters|Defines the template parameters used for the template URI defined earlier. These parameters are used to deploy the template on the run page. The grid contains an **Expand** toolbar button to help fill the parameters by using the names defined in the template URI and set to static empty values. This option can only be used when there are no parameters in the grid and the template URI has been set. The lower section is a preview of what the parameter output looks like. Select **Refresh** to update the preview with current changes. Parameters are typically values. References are something that could point to key vault secrets that the user has access to. <br/><br/> **Template Viewer blade limitation** doesn't render reference parameters correctly and will show up as null/value. As a result, users won't be able to correctly deploy reference parameters from the **Template Viewer** tab.|
+|ARM Template Parameters|Defines the template parameters used for the template URI defined earlier. These parameters are used to deploy the template on the run page. The grid contains an **Expand** toolbar button to help fill the parameters by using the names defined in the template URI and set to static empty values. This option can only be used when there are no parameters in the grid and the template URI has been set. The lower section is a preview of what the parameter output looks like. Select **Refresh** to update the preview with current changes. Parameters are typically values. References can point to key vault secrets that the user has access to. <br/><br/> Because of a **Template Viewer** pane limitation, reference parameters aren't rendered correctly and show up as null values. As a result, users won't be able to correctly deploy reference parameters from the **Template Viewer** tab.|
![Screenshot that shows the Template Settings tab.](./media/workbooks-link-actions/template-settings.png)
azure-monitor Workbooks Renderers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-renderers.md
The following instructions show you how to use thresholds with links to assign i
1. Select the **Make this item a link** checkbox. - Under **View to open**, select **Workbook (Template)**. - Under **Link value comes from**, select **link**.
- - Select the **Open link in Context Blade** checkbox.
+ - Select the **Open link in Context pane** checkbox.
- Choose the following settings in **Workbook Link Settings**: - Under **Template Id comes from**, select **Column**. - Under **Column**, select **link**.
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 11/21/2022 Last updated : 11/22/2022 # Create and manage Active Directory connections for Azure NetApp Files
Several features of Azure NetApp Files require that you have an Active Directory
* **AD Site Name (required)** This is the AD DS site name that will be used by Azure NetApp Files for domain controller discovery.
+ The default site name for both AD DS and Azure AD DS is `Default-First-Site-Name`. If you want to rename the site, follow the [naming conventions for site names](/troubleshoot/windows-server/identity/naming-conventions-for-computer-domain-site-ou.md#site-names).
+ >[!NOTE] > See [Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files](understand-guidelines-active-directory-domain-service-site.md). Ensure that your AD DS site design and configuration meets the requirements for Azure NetApp Files. Otherwise, Azure NetApp Files service operations, SMB authentication, Kerberos, or LDAP operations might fail.
azure-video-indexer Animated Characters Recognition How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/animated-characters-recognition-how-to.md
Follow these steps to connect your Custom Vision account to Azure Video Indexer,
1. Select **Connect Custom Vision Account** and select **Try it**. 1. Fill in the required fields and the access token and select **Send**.
- For more information about how to get the Video Indexer access token go to the [developer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token), and see the [relevant documentation](video-indexer-use-apis.md#obtain-access-token-using-the-authorization-api).
+ For more information about how to get the Azure Video Indexer access token, go to the [developer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token), and see the [relevant documentation](video-indexer-use-apis.md#obtain-access-token-using-the-authorization-api).
1. Once the call returns a 200 OK response, your account is connected. 1. To verify your connection, browse to the [Azure Video Indexer](https://vi.microsoft.com/) website: 1. Select the **Content model customization** button in the top-right corner.
azure-video-indexer Connect Classic Account To Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-classic-account-to-arm.md
Before the end of the 30 days of transition state, you can remove access from us
## Get started
-### Browse to [Azure Video Indexer portal](https://aka.ms/vi-portal-link)
+### Browse to the [Azure Video Indexer website](https://aka.ms/vi-portal-link)
1. Sign in using your Azure AD account. 1. On the top-right bar, press *User account* to open the side pane account list.
azure-video-indexer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md
If your storage account is behind a firewall, see [storage account that is behin
> [!NOTE] > Make sure to write down the Media Services resource and account names.
-1. Before you can play your videos in the Azure Video Indexer web app, you must start the default **Streaming Endpoint** of the new Media Services account.
+1. Before you can play your videos in the [Azure Video Indexer](https://www.videoindexer.ai/) website, you must start the default **Streaming Endpoint** of the new Media Services account.
In the new Media Services account, select **Streaming endpoints**. Then select the streaming endpoint and press start.
The following Azure Media Services related considerations apply:
![Media Services streaming endpoint](./media/create-account/ams-streaming-endpoint.png)
- Streaming endpoints have a considerable startup time. Therefore, it may take several minutes from the time you connected your account to Azure until your videos can be streamed and watched in the Azure Video Indexer web app.
+ Streaming endpoints have a considerable startup time. Therefore, it may take several minutes from the time you connected your account to Azure until your videos can be streamed and watched in the [Azure Video Indexer](https://www.videoindexer.ai/) website.
* If you connect to an existing Media Services account, Azure Video Indexer doesn't change the default Streaming Endpoint configuration. If there's no running **Streaming Endpoint**, you can't watch videos from this Media Services account or in Azure Video Indexer. ## Create a classic account
azure-video-indexer Deploy With Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-arm-template.md
You need an Azure Media Services account. You can create one for free through [C
If you're new to Azure Video Indexer, see:
-* [Azure Video Indexer documentation](./index.yml)
-* [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/)
+* [The Azure Video Indexer documentation](./index.yml)
+* [The Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai/)
After you complete this tutorial, head to other Azure Video Indexer samples described in [README.md](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/README.md).
azure-video-indexer Deploy With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-bicep.md
In this tutorial, you create an Azure Video Indexer account by using [Bicep](../
> [!NOTE] > This sample is *not* for connecting an existing Azure Video Indexer classic account to an ARM-based Azure Video Indexer account.
-> For full documentation on Azure Video Indexer API, visit the [Developer portal](https://aka.ms/avam-dev-portal) page.
+> For full documentation on Azure Video Indexer API, visit the [developer portal](https://aka.ms/avam-dev-portal) page.
> For the latest API version for Microsoft.VideoIndexer, see the [template reference](/azure/templates/microsoft.videoindexer/accounts?tabs=bicep). ## Prerequisites
Check [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-tem
If you're new to Azure Video Indexer, see:
-* [Azure Video Indexer Documentation](./index.yml)
-* [Azure Video Indexer Developer Portal](https://api-portal.videoindexer.ai/)
+* [The Azure Video Indexer documentation](./index.yml)
+* [The Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/)
* After completing this tutorial, head to other Azure Video Indexer samples, described on [README.md](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/README.md) If you're new to Bicep deployment, see:
azure-video-indexer Edit Transcript Lines Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/edit-transcript-lines-portal.md
This section shows how to examine word-level transcription information based on
## Next steps
-For updating transcript lines and text using API visit [Azure Video Indexer Developer portal](https://aka.ms/avam-dev-portal)
+For updating transcript lines and text using API visit the [Azure Video Indexer API developer portal](https://aka.ms/avam-dev-portal)
azure-video-indexer Import Content From Trial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/import-content-from-trial.md
Review the following considerations.
To import your data, follow the steps:
- 1. Go to [Azure Video Indexer portal](https://aka.ms/vi-portal-link)
+ 1. Go to the [Azure Video Indexer website](https://aka.ms/vi-portal-link)
 2. Select your trial account and go to the **Account settings** page. 3. Click **Import content to an ARM-based account**. 4. From the dropdown menu, choose the ARM-based account you wish to import the data to.
azure-video-indexer Invite Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/invite-users.md
In addition to bringing up the **Share this account with others** dialog by clic
## Next steps
-You can now use the [Azure Video Indexer website](video-indexer-view-edit.md) or [Azure Video Indexer Developer Portal](video-indexer-use-apis.md) to see the insights of the video.
+You can now use the [Azure Video Indexer website](video-indexer-view-edit.md) or [Azure Video Indexer API developer portal](video-indexer-use-apis.md) to see the insights of the video.
## See also
azure-video-indexer Logic Apps Connector Arm Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-arm-accounts.md
The following image shows the first flow:
|Video URL|Select **Web Url** from the dynamic content of **Create SAS URI by path** action.| | Body| Can be left as default.|
- ![Screenshot of the upload and index action.](./media/logic-apps-connector-arm-accounts/upload-and-index.png)
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/logic-apps-connector-arm-accounts/upload-and-index-expression.png" alt-text="Screenshot of the upload and index action." lightbox="./media/logic-apps-connector-arm-accounts/upload-and-index-expression.png":::
Select **Save**.
azure-video-indexer Manage Multiple Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/manage-multiple-tenants.md
When using this architecture, an Azure Video Indexer account is created for each
* Harder to manage due to multiple Azure Video Indexer (and associated Media Services) accounts per tenant. > [!TIP]
-> Create an admin user for your system in [Video Indexer Developer Portal](https://api-portal.videoindexer.ai/) and use the Authorization API to provide your tenants the relevant [account access token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token).
+> Create an admin user for your system in [the Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/) and use the Authorization API to provide your tenants the relevant [account access token](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Account-Access-Token).
## Single Azure Video Indexer account for all users
azure-video-indexer Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/network-security.md
# NSG service tags for Azure Video Indexer
-Azure Video Indexer is a service hosted on Azure. In some architecture cases the service needs to interact with other services in order to index video files (that is, a Storage Account) or when a customer orchestrates indexing jobs against our API endpoint using their own service hosted on Azure (i.e AKS, Web Apps, Logic Apps, Functions). Customers who would like to limit access to their resources on a network level can use [Network Security Groups with Service Tags](../virtual-network/service-tags-overview.md). A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
+Azure Video Indexer is a service hosted on Azure. In some cases, the service needs to interact with other services to index video files (for example, a storage account), or you might orchestrate indexing jobs against the Azure Video Indexer API endpoint using your own service hosted on Azure (for example, AKS, Web Apps, Logic Apps, Functions).
+
+Use [Network Security Groups with Service Tags](../virtual-network/service-tags-overview.md) to limit access to your resources on a network level. A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
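As a rough sketch of what this looks like in practice, the Az PowerShell snippet below adds an outbound rule to an existing network security group using a service tag as the destination. The resource names are placeholders and the tag name `AzureVideoAnalyzerForMedia` is an assumption here; confirm the correct service tag for Azure Video Indexer in the guidance that follows.
```powershell
# Minimal sketch (assumed names): allow outbound traffic to the Azure Video Indexer
# service tag from an existing NSG. The tag name "AzureVideoAnalyzerForMedia" is an
# assumption - verify it before use.
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName "my-resource-group" -Name "my-nsg"

$nsg | Add-AzNetworkSecurityRuleConfig `
    -Name "Allow-VideoIndexer-Outbound" `
    -Description "Allow outbound calls to the Video Indexer service tag" `
    -Access Allow `
    -Protocol Tcp `
    -Direction Outbound `
    -Priority 200 `
    -SourceAddressPrefix "VirtualNetwork" `
    -SourcePortRange "*" `
    -DestinationAddressPrefix "AzureVideoAnalyzerForMedia" `
    -DestinationPortRange "443" |
  Set-AzNetworkSecurityGroup
```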
## Get started with service tags
azure-video-indexer Odrv Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/odrv-download.md
You can use the [Upload Video](https://api-portal.videoindexer.ai/api-details#ap
### Configurations and parameters
-This section describes some of the optional parameters and when to set them. For the most up-to-date info about parameters, see the [Azure Video Indexer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video).
+This section describes some of the optional parameters and when to set them. For the most up-to-date info about parameters, see the [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video).
#### externalID
After you copy the following code into your development platform, you'll need to
To get your API key:
- 1. Go to the [Azure Video Indexer portal](https://api-portal.videoindexer.ai/).
+ 1. Go to the [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai/).
1. Sign in. 1. Go to **Products** > **Authorization** > **Authorization subscription**. 1. Copy the **Primary key** value.
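Once you have the key, a minimal PowerShell sketch of the upload flow looks like the following. The account ID, location, and video URL are placeholders, and the endpoint shapes are assumptions based on the Operations API described in the developer portal, so verify them against the Upload Video reference before relying on this.
```powershell
# Minimal sketch (placeholder IDs, assumed endpoint shapes): get an account access
# token with the subscription key, then upload a video from a URL.
$apiUrl    = "https://api.videoindexer.ai"
$location  = "<your-account-location>"      # for example, trial or an Azure region
$accountId = "<your-account-id>"
$apiKey    = "<your-primary-key>"

# 1. Request an account access token (the subscription key goes in this header only).
$token = Invoke-RestMethod -Method Get `
    -Uri "$apiUrl/Auth/$location/Accounts/$accountId/AccessToken?allowEdit=true" `
    -Headers @{ "Ocp-Apim-Subscription-Key" = $apiKey }

# 2. Upload and index a video from a publicly accessible URL.
$videoUrl = [uri]::EscapeDataString("https://example.com/video.mp4")
$upload = Invoke-RestMethod -Method Post `
    -Uri "$apiUrl/$location/Accounts/$accountId/Videos?accessToken=$token&name=sample-video&videoUrl=$videoUrl"

$upload.id   # the video ID to use when polling for the index results
```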
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Title: Azure Video Indexer release notes | Microsoft Docs
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer. Previously updated : 11/07/2022 Last updated : 11/22/2022
To stay up-to-date with the most recent Azure Video Indexer developments, this article provides you with information about:
-* [Important notice](#upcoming-critical-changes) about planned changes
+<!--* [Important notice](#upcoming-critical-changes) about planned changes-->
* The latest releases * Known issues * Bug fixes * Deprecated functionality
-## Upcoming critical changes
-
-> [!Important]
-> This section describes a critical upcoming change for the `Upload-Video` API.
-
-### Upload-Video API
-
-In the past, the `Upload-Video` API was tolerant to calls to upload a video from a URL where an empty multipart form body was provided in the C# code, such as:
-
-```csharp
-var content = new MultipartFormDataContent();
-var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos?{queryParams}", content);
-```
-
-In the coming weeks, our service will fail requests of this type.
-
-In order to upload a video from a URL, change your code to send null in the request body:
-
-```csharp
-var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos?{queryParams}", null);
-```
- ## November 2022 ### Speakers' names can now be edited from the Azure Video Indexer website
For details, see [Slate detection](slate-detection-insight.md).
### New source languages support for STT, translation, and search
-Now supporting source languages for STT (speech-to-text), translation, and search in Ukraine and Vietnamese. It means transcription, translation, and search features are also supported for these languages in Azure Video Indexer web applications, widgets and APIs.
+Azure Video Indexer now supports Ukrainian and Vietnamese as source languages for STT (speech-to-text), translation, and search. This means that transcription, translation, and search features are also supported for these languages in the [Azure Video Indexer](https://www.videoindexer.ai/) website, widgets, and APIs.
For more information, see [supported languages](language-support.md).
For more information, see [Audio effects detection](audio-effects-detection.md).
### New source languages support for STT, translation, and search on the website Azure Video Indexer introduces source languages support for STT (speech-to-text), translation, and search in Hebrew (he-IL), Portuguese (pt-PT), and Persian (fa-IR) on the [Azure Video Indexer](https://www.videoindexer.ai/) website.
-It means transcription, translation, and search features are also supported for these languages in Azure Video Indexer web applications and widgets.
+This means that transcription, translation, and search features are also supported for these languages in the [Azure Video Indexer](https://www.videoindexer.ai/) website and widgets.
## December 2021
The Video Indexer service was renamed to Azure Video Indexer.
### Improved upload experience in the portal
-Azure Video Indexer has a new upload experience in the [portal](https://www.videoindexer.ai). To upload your media file, press the **Upload** button from the **Media files** tab.
+Azure Video Indexer has a new upload experience in the [website](https://www.videoindexer.ai). To upload your media file, press the **Upload** button from the **Media files** tab.
### New developer portal is available in gov-cloud
-[Azure Video Indexer Developer Portal](https://api-portal.videoindexer.ai) is now also available in Azure for US Government.
+The [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai) is now also available in Azure for US Government.
### Observed people tracing (preview)
The newly added bundle is available when indexing or re-indexing your file by ch
### New developer portal
-Azure Video Indexer has a new [Developer Portal](https://api-portal.videoindexer.ai/), try out the new Azure Video Indexer APIs and find all the relevant resources in one place: [GitHub repository](https://github.com/Azure-Samples/media-services-video-indexer), [Stack overflow](https://stackoverflow.com/questions/tagged/video-indexer), [Azure Video Indexer tech community](https://techcommunity.microsoft.com/t5/azure-media-services/bg-p/AzureMediaServices/label-name/Video%20Indexer) with relevant blog posts, [Azure Video Indexer FAQs](faq.yml), [User Voice](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858) to provide your feedback and suggest features, and ['CodePen' link](https://codepen.io/videoindexer) with widgets code samples.
+Azure Video Indexer has a new [developer portal](https://api-portal.videoindexer.ai/). Try out the new Azure Video Indexer APIs and find all the relevant resources in one place: [GitHub repository](https://github.com/Azure-Samples/media-services-video-indexer), [Stack overflow](https://stackoverflow.com/questions/tagged/video-indexer), [Azure Video Indexer tech community](https://techcommunity.microsoft.com/t5/azure-media-services/bg-p/AzureMediaServices/label-name/Video%20Indexer) with relevant blog posts, [Azure Video Indexer FAQs](faq.yml), [User Voice](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858) to provide your feedback and suggest features, and ['CodePen' link](https://codepen.io/videoindexer) with widgets code samples.
### Advanced customization capabilities for insight widget
You can now create an Azure Video Indexer paid account in the East US region.
Azure Video Indexer regional endpoints were all unified to start only with www. No action item is required.
-From now on, you reach www.videoindexer.ai whether it is for embedding widgets or logging into Azure Video Indexer web applications.
+From now on, you reach www.videoindexer.ai whether it is for embedding widgets or logging into the [Azure Video Indexer](https://www.videoindexer.ai/) website.
Also, wus.videoindexer.ai is redirected to www. More information is available in [Embed Azure Video Indexer widgets in your apps](video-indexer-embed-widgets.md).
https://github.com/Azure-Samples/media-services-video-indexer
### Swagger update
-Azure Video Indexer unified **authentications** and **operations** into a single [Azure Video Indexer OpenAPI Specification (swagger)](https://api-portal.videoindexer.ai/api-details#api=Operations&operation). Developers can find the APIs in [Azure Video Indexer Developer Portal](https://api-portal.videoindexer.ai/).
+Azure Video Indexer unified **authentications** and **operations** into a single [Azure Video Indexer OpenAPI Specification (swagger)](https://api-portal.videoindexer.ai/api-details#api=Operations&operation). Developers can find the APIs in the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
## December 2019
azure-video-indexer Upload Index Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/upload-index-videos.md
This article shows how to upload and index videos by using the Azure Video Indexer website (see [get started with the website](video-indexer-get-started.md)) and the Upload Video API (see [get started with API](video-indexer-use-apis.md)).
-After you upload and index a video, you can use [Azure Video Indexer website](video-indexer-view-edit.md) or [Azure Video Indexer Developer Portal](video-indexer-use-apis.md) to see the insights of the video (see [Examine the Azure Video Indexer output](video-indexer-output-json-v2.md)).
+After you upload and index a video, you can use the [Azure Video Indexer website](video-indexer-view-edit.md) or the [Azure Video Indexer API developer portal](video-indexer-use-apis.md) to see the insights of the video (see [Examine the Azure Video Indexer output](video-indexer-output-json-v2.md)).
## Supported file formats
You can use the [Upload Video](https://api-portal.videoindexer.ai/api-details#ap
### Configurations and parameters
-This section describes some of the optional parameters and when to set them. For the most up-to-date info about parameters, see the [Azure Video Indexer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video).
+This section describes some of the optional parameters and when to set them. For the most up-to-date info about parameters, see the [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video).
#### externalID
After you copy the following code into your development platform, you'll need to
To get your API key:
- 1. Go to the [Azure Video Indexer portal](https://api-portal.videoindexer.ai/).
+ 1. Go to the [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai/).
1. Sign in. 1. Go to **Products** > **Authorization** > **Authorization subscription**. 1. Copy the **Primary key** value.
azure-video-indexer Video Indexer Embed Widgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-embed-widgets.md
If you embed Azure Video Indexer insights with your own [Azure Media Player](htt
### Cognitive Insights widget
-You can choose the types of insights that you want. To do this, specify them as a value to the following URL parameter that's added to the embed code that you get (from the API or from the web app): `&widgets=<list of wanted widgets>`.
+You can choose the types of insights that you want. To do this, specify them as a value to the following URL parameter that's added to the embed code that you get (from the [API](https://aka.ms/avam-dev-portal) or from the [Azure Video Indexer](https://www.videoindexer.ai/) website): `&widgets=<list of wanted widgets>`.
The possible values are: `people`, `animatedCharacters` , `keywords`, `labels`, `sentiments`, `emotions`, `topics`, `keyframes`, `transcript`, `ocr`, `speakers`, `scenes`, and `namedEntities`.
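To make the parameter concrete, here's a small PowerShell sketch that appends a `widgets` list to an embed URL. The base URL below is a placeholder; use the actual embed code you copied from the website or the API.
```powershell
# Minimal sketch: append the widgets parameter to a copied embed URL.
# The base URL is a placeholder - replace it with your real embed code.
$embedUrl = "https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/"
$widgets  = "people,keywords,sentiments"

# Use '&' if the embed URL already has a query string, otherwise start one with '?'.
$separator = if ($embedUrl.Contains("?")) { "&" } else { "?" }
$embedUrlWithWidgets = "$embedUrl$separator" + "widgets=$widgets"
$embedUrlWithWidgets
```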
azure-video-indexer Video Indexer Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-get-started.md
See the [input container/file formats](/azure/media-services/latest/encode-media
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/video-indexer-get-started/uploaded.png" alt-text="Screenshot of the uploaded video.":::
-After you upload and index a video, you can continue using [Azure Video Indexer website](video-indexer-view-edit.md) or [Azure Video Indexer Developer Portal](video-indexer-use-apis.md) to see the insights of the video (see [Examine the Azure Video Indexer output](video-indexer-output-json-v2.md)).
+After you upload and index a video, you can continue using the [Azure Video Indexer website](video-indexer-view-edit.md) or the [Azure Video Indexer API developer portal](video-indexer-use-apis.md) to see the insights of the video (see [Examine the Azure Video Indexer output](video-indexer-output-json-v2.md)).
## Start using insights
azure-video-indexer Video Indexer Output Json V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-output-json-v2.md
Azure Video Indexer makes an inference of main topics from transcripts. When pos
## Next steps
-Explore the [Azure Video Indexer Developer Portal](https://api-portal.videoindexer.ai).
+Explore the [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai).
For information about how to embed widgets in your application, see [Embed Azure Video Indexer widgets into your applications](video-indexer-embed-widgets.md).
azure-video-indexer Video Indexer Use Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-use-apis.md
Before you start, see the [Recommendations](#recommendations) section (that foll
## Subscribe to the API
-1. Sign in to [Azure Video Indexer Developer Portal](https://api-portal.videoindexer.ai/).
+1. Sign in to the [Azure Video Indexer API developer portal](https://api-portal.videoindexer.ai/).
> [!Important] > * You must use the same provider you used when you signed up for Azure Video Indexer. > * Personal Google and Microsoft (Outlook/Live) accounts can only be used for trial accounts. Accounts connected to Azure require Azure AD. > * There can be only one active account per email. If a user tries to sign in with user@gmail.com for LinkedIn and later with user@gmail.com for Google, the latter will display an error page, saying the user already exists.
- ![Sign in to Azure Video Indexer Developer Portal](./media/video-indexer-use-apis/sign-in.png)
+ ![Sign in to the Azure Video Indexer API developer portal](./media/video-indexer-use-apis/sign-in.png)
1. Subscribe. Select the [Products](https://api-portal.videoindexer.ai/products) tab. Then, select **Authorization** and subscribe.
Before you start, see the [Recommendations](#recommendations) section (that foll
After you subscribe, you can find your subscription under **[Products](https://api-portal.videoindexer.ai/products)** -> **Profile**. In the subscriptions section, you'll find the primary and secondary keys. The keys should be protected. The keys should only be used by your server code. They shouldn't be available on the client side (.js, .html, and so on).
- ![Subscription and keys in Video Indexer Developer Portal](./media/video-indexer-use-apis/subscriptions.png)
+ ![Subscription and keys in the Azure Video Indexer API developer portal](./media/video-indexer-use-apis/subscriptions.png)
An Azure Video Indexer user can use a single subscription key to connect to multiple Azure Video Indexer accounts. You can then link these Azure Video Indexer accounts to different Media Services accounts.
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
Azure Backup provides several ways to restore a VM.
**Restore disk** | Restores a VM disk, which can then be used to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings, and create a VM.<br/><br/> The disks are copied to the Resource Group you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured using the template or PowerShell. **Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault, and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs, including VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's unsupported for classic VMs, unmanaged VMs, and [generalized VMs](../virtual-machines/windows/upload-generalized-managed.md).<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) or [Key Vault](../key-vault/general/overview.md). **Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> During the backup, snapshots aren't replicated to the secondary region. Only the data stored in the vault is replicated. So secondary region restores are only [vault tier](about-azure-vm-restore.md#concepts) restores. The restore time for the secondary region will be almost the same as the vault tier restore time for the primary region. <br><br> This feature is available for the options below:<br><br> - [Create a VM](#create-a-vm) <br> - [Restore Disks](#restore-disks) <br><br> We don't currently support the [Replace existing disks](#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins.
-**Cross Subscription Restore** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
-**Cross Zonal Restore** | Allows you to restore Azure Virtual Machines or disks pinned to any zone to different available zones (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Zonal Restore for managed virtual machines only. <br><br> Cross Zonal Restore is supported for [Restore with Managed System Identities (MSI)](#restore-vms-with-managed-identities). <br><br> Cross Zonal Restore supports restore of an Azure zone pinned/non-zone pinned VM from a vault with Zonal-redundant storage (ZRS) enabled. Learn [how to set Storage Redundancy](backup-create-rs-vault.md#set-storage-redundancy). <br><br> It's supported to restore an Azure zone pinned VM only from a [vault with Cross Region Restore (CRR)](backup-create-rs-vault.md#set-storage-redundancy) (if the secondary region supports zones) or Zone Redundant Storage (ZRS) enabled. <br><br> Cross Zonal Restore is supported from [secondary regions](#restore-in-secondary-region). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) restore point. <br><br> It's unsupported for [Encrypted Azure VMs and Trusted Launch VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups).
+**Cross Subscription Restore (preview)** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
+**Cross Zonal Restore (preview)** | Allows you to restore Azure Virtual Machines or disks pinned to any zone to different available zones (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Zonal Restore for managed virtual machines only. <br><br> Cross Zonal Restore is supported for [Restore with Managed System Identities (MSI)](#restore-vms-with-managed-identities). <br><br> Cross Zonal Restore supports restore of an Azure zone pinned/non-zone pinned VM from a vault with Zonal-redundant storage (ZRS) enabled. Learn [how to set Storage Redundancy](backup-create-rs-vault.md#set-storage-redundancy). <br><br> It's supported to restore an Azure zone pinned VM only from a [vault with Cross Region Restore (CRR)](backup-create-rs-vault.md#set-storage-redundancy) (if the secondary region supports zones) or Zone Redundant Storage (ZRS) enabled. <br><br> Cross Zonal Restore is supported from [secondary regions](#restore-in-secondary-region). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) restore point. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
>[!Tip] >To receive alerts/notifications when a restore operation fails, use [Azure Monitor alerts for Azure Backup](backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup). This helps you to monitor such failures and take necessary actions to remediate the issues.
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Recovery points on DPM/MABS disk | 64 for file servers, and 448 for app servers.
**Create a new VM** | Quickly creates and gets a basic VM up and running from a restore point.<br/><br/> You can specify a name for the VM, select the resource group and virtual network (VNet) in which it will be placed, and specify a storage account for the restored VM. The new VM must be created in the same region as the source VM. **Restore disk** | Restores a VM disk, which can then be used to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings, and create a VM.<br/><br/> The disks are copied to the Resource Group you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured using the template or PowerShell. **Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault, and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>Replace existing is supported for unencrypted managed VMs and for VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's not supported for unmanaged disks and VMs, classic VMs, and [generalized VMs](../virtual-machines/windows/capture-image-resource.md).<br/><br/> If the restore point has more or less disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) and [Key Vault](../key-vault/general/overview.md).
-**Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> This feature is available for the options below:<br> <li> [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> <li> [Restore Disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> We don't currently support the [Replace existing disks](./backup-azure-arm-restore-vms.md#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins.
+**Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore Disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> We don't currently support the [Replace existing disks](./backup-azure-arm-restore-vms.md#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on secondary region can be performed by Backup Admins and App admins.
+**Cross Subscription (preview)** | Cross Subscription restore can be used to restore Azure managed VMs in different subscriptions.<br><br> You can restore Azure VMs or disks to any subscription (as per the Azure RBAC capabilities) from restore points. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore Disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross Subscription Restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
+**Cross Zonal Restore (preview)** | Cross Zonal restore can be used to restore Azure zone pinned VMs in available zones.<br><br> You can restore Azure VMs or disks to different zones (as per the Azure RBAC capabilities) from restore points. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore Disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross Zonal Restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) restore points. It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup).
+ ## Support for file-level restore
The following table summarizes support for backup during VM management tasks, su
**Restore** | **Supported** |
-<a name="backup-azure-cross-subscription-restore">Restore across subscription</a> | [Cross Subscription Restore](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
+<a name="backup-azure-cross-subscription-restore">Restore across subscription</a> | [Cross Subscription Restore (preview)](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
[Restore across region](backup-azure-arm-restore-vms.md#cross-region-restore) | Supported.
-<a name="backup-azure-cross-zonal-restore">Restore across zone</a> | [Cross Zonal Restore](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
+<a name="backup-azure-cross-zonal-restore">Restore across zone</a> | [Cross Zonal Restore (preview)](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs.
Restore to an existing VM | Use the replace disk option. Restore disk with storage account enabled for Azure Storage Service Encryption (SSE) | Not supported.<br/><br/> Restore to an account that doesn't have SSE enabled. Restore to mixed storage accounts | Not supported.<br/><br/> Based on the storage account type, all restored disks will be either premium or standard, and not mixed.
batch Monitor Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/monitor-application-insights.md
Last updated 04/13/2021
[Application Insights](../azure-monitor/app/app-insights-overview.md) provides an elegant and powerful way for developers to monitor and debug applications deployed to Azure services. Use Application Insights to monitor performance counters and exceptions as well as instrument your code with custom metrics and tracing. Integrating Application Insights with your Azure Batch application allows you to gain deep insights into behaviors and investigate issues in near-real time.
-This article shows how to add and configure the Application Insights library into your Azure Batch .NET solution and instrument your application code. It also shows ways to monitor your application via the Azure portal and build custom dashboards. For Application Insights support in other languages, see the [languages, platforms, and integrations documentation](../azure-monitor/app/platforms.md).
+This article shows how to add and configure the Application Insights library into your Azure Batch .NET solution and instrument your application code. It also shows ways to monitor your application via the Azure portal and build custom dashboards. For Application Insights support in other languages, see the [languages, platforms, and integrations documentation](../azure-monitor/app/app-insights-overview.md#supported-languages).
A sample C# solution with code to accompany this article is available on [GitHub](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/ApplicationInsights). This example adds Application Insights instrumentation code to the [TopNWords](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/TopNWords) example. If you're not familiar with that example, try building and running TopNWords first. Doing this will help you understand a basic Batch workflow of processing a set of input blobs in parallel on multiple compute nodes.
Due to the large-scale nature of Azure Batch applications running in production,
## Next steps - Learn more about [Application Insights](../azure-monitor/app/app-insights-overview.md).-- For Application Insights support in other languages, see the [languages, platforms, and integrations documentation](../azure-monitor/app/platforms.md).
+- For Application Insights support in other languages, see the [languages, platforms, and integrations documentation](../azure-monitor/app/app-insights-overview.md#supported-languages).
chaos-studio Sample Template Experiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/sample-template-experiment.md
In this sample, we create a chaos experiment with a single target resource and a
{ "type": "Microsoft.Chaos/experiments", "apiVersion": "2021-09-15-preview",
- "name": "parameters('experimentName')",
- "location": "parameters('location')",
+ "name": "[parameters('experimentName')]",
+ "location": "[parameters('location')]",
"identity": { "type": "SystemAssigned" },
In this sample, we create a chaos experiment with a single target resource and a
"targets": [ { "type": "ChaosTarget",
- "id": "parameters('chaosTargetResourceId')"
+ "id": "[parameters('chaosTargetResourceId')]"
} ] }
chaos-studio Sample Template Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/sample-template-targets.md
In this sample, we onboard an Azure Cosmos DB instance using [targets and capabi
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-CosmosDB/Failover-1.0')]", "location": "[parameters('location')]", "dependsOn": [
- "[concat(resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('resourceName'), parameters('resourceGroup')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-CosmosDB')]"
+ "[concat(resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('resourceName')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-CosmosDB')]"
], "properties": {} }
In this sample, we onboard an Azure Kubernetes Service cluster using [targets an
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/NetworkChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
- "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName'), parameters('resourceGroup')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
+ "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
], "properties": {} },
In this sample, we onboard an Azure Kubernetes Service cluster using [targets an
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/PodChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
- "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName'), parameters('resourceGroup')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
+ "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
], "properties": {} },
In this sample, we onboard an Azure Kubernetes Service cluster using [targets an
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/StressChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
- "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName'), parameters('resourceGroup')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
+ "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
], "properties": {} },
In this sample, we onboard an Azure Kubernetes Service cluster using [targets an
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/IOChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
- "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName'), parameters('resourceGroup')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
+ "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
], "properties": {} },
In this sample, we onboard an Azure Kubernetes Service cluster using [targets an
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/TimeChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
- "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName'), parameters('resourceGroup')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
+ "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
], "properties": {} },
In this sample, we onboard an Azure Kubernetes Service cluster using [targets an
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/KernelChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
- "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName'), parameters('resourceGroup')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
+ "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
], "properties": {} },
In this sample, we onboard an Azure Kubernetes Service cluster using [targets an
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/DNSChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
- "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName'), parameters('resourceGroup')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
+ "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
], "properties": {} },
In this sample, we onboard an Azure Kubernetes Service cluster using [targets an
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/HTTPChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
- "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName'), parameters('resourceGroup')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
+ "[concat(resourceId('Microsoft.ContainerService/managedClusters', parameters('resourceName')), '/', 'providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh')]"
], "properties": {} }
cognitive-services Find Similar Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/find-similar-faces.md
Previously updated : 05/05/2022 Last updated : 11/07/2022
This guide demonstrates how to use the Find Similar feature in the different lan
This guide uses remote images that are accessed by URL. Save a reference to the following URL string. All of the images accessed in this guide are located at this URL path. ```
-"https://csdx.blob.core.windows.net/resources/Face/media/"
+https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/
``` ## Detect faces for comparison
cognitive-services Mitigate Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/mitigate-latency.md
Previously updated : 1/5/2021 Last updated : 11/07/2021 ms.devlang: csharp
The Face service must then download the image from the remote server. If the con
To mitigate this situation, consider [storing the image in Azure Premium Blob Storage](../../../storage/blobs/storage-upload-process-images.md?tabs=dotnet). For example: ``` csharp
-var faces = await client.Face.DetectWithUrlAsync("https://csdx.blob.core.windows.net/resources/Face/Images/Family1-Daughter1.jpg");
+var faces = await client.Face.DetectWithUrlAsync("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Daughter1.jpg");
``` ### Large upload size
If the file to upload is large, that will impact the response time of the `Detec
Mitigations: - Consider [storing the image in Azure Premium Blob Storage](../../../storage/blobs/storage-upload-process-images.md?tabs=dotnet). For example: ``` csharp
-var faces = await client.Face.DetectWithUrlAsync("https://csdx.blob.core.windows.net/resources/Face/Images/Family1-Daughter1.jpg");
+var faces = await client.Face.DetectWithUrlAsync("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Daughter1.jpg");
``` - Consider uploading a smaller file. - See the guidelines regarding [input data for face detection](../concept-face-detection.md#input-data) and [input data for face recognition](../concept-face-recognition.md#input-data).
cognitive-services Overview Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-identity.md
Previously updated : 06/13/2022 Last updated : 11/06/2022 keywords: facial recognition, facial recognition software, facial analysis, face matching, face recognition app, face search by image, facial recognition search
This documentation contains the following types of articles:
* The [conceptual articles](./concept-face-detection.md) provide in-depth explanations of the service's functionality and features. * The [tutorials](./enrollment-overview.md) are longer guides that show you how to use this service as a component in broader business solutions.
-For a more structured approach, follow a Learn module for Face.
+For a more structured approach, follow a Training module for Face.
* [Detect and analyze faces with the Face service](/training/modules/detect-analyze-faces/) ## Example use cases
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-image-analysis.md
Previously updated : 11/03/2022 Last updated : 11/06/2022 keywords: computer vision, computer vision applications, computer vision service
This documentation contains the following types of articles:
* The [conceptual articles](concept-tagging-images.md) provide in-depth explanations of the service's functionality and features. * The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.
-For a more structured approach, follow a Learn module for Image Analysis.
+For a more structured approach, follow a Training module for Image Analysis.
* [Analyze images with the Computer Vision service](/training/modules/analyze-images-computer-vision/) ## Image Analysis features
cognitive-services Vehicle Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/vehicle-analysis.md
Previously updated : 09/28/2022 Last updated : 11/07/2022
Vehicle analysis is a set of capabilities that, when used with the Spatial Analy
* To utilize the operations of vehicle analysis, you must first follow the steps to [install and run spatial analysis container](./spatial-analysis-container.md) including configuring your host machine, downloading and configuring your [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest.json) file, executing the deployment, and setting up device [logging](spatial-analysis-logging.md). * When you configure your [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest.json) file, refer to the steps below to add the graph configurations for vehicle analysis to your manifest prior to deploying the container. Or, once the spatial analysis container is up and running, you may add the graph configurations and follow the steps to redeploy. The steps below will outline how to properly configure your container.
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
> [!NOTE] > Make sure that the edge device has at least 50GB disk space available before deploying the Spatial Analysis module.
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CSHARP&Pillar=Vision&Product=spatial-analysis&Page=howto&Section=prerequisites" target="_target">I ran into an issue</a>
+ ## Vehicle analysis operations Similar to Spatial Analysis, vehicle analysis enables the analysis of real-time streaming video from camera devices. For each camera device you configure, the operations for vehicle analysis will generate an output stream of JSON messages that are being sent to your instance of Azure IoT Hub.
Below is the graph optimized for the **vehicle in polygon** operation, utilized
} ```
+> [!div class="nextstepaction"]
+> <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=CSHARP&Pillar=Vision&Product=spatial-analysis&Page=howto&Section=configuring-the-vehicle-analysis-operations" target="_target">I ran into an issue</a>
+ ## Sample cognitiveservices.vision.vehicleanalysis-vehiclecount-preview and cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview output The JSON below demonstrates an example of the vehicle count operation graph output.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/overview.md
Previously updated : 09/29/2021 Last updated : 11/06/2021 keywords: content moderator, azure content moderator, online moderator, content filtering software, content moderation service, content moderation
This documentation contains the following article types:
* [**Concepts**](text-moderation-api.md) provide in-depth explanations of the service functionality and features. * [**Tutorials**](ecommerce-retail-catalog-moderation.md) are longer guides that show you how to use the service as a component in broader business solutions.
-For a more structured approach, follow a Learn module for Content Moderator.
+For a more structured approach, follow a Training module for Content Moderator.
* [Introduction to Content Moderator](/training/modules/intro-to-content-moderator/) * [Classify and moderate text with Azure Content Moderator](/training/modules/classify-and-moderate-text-with-azure-content-moderator/)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/overview.md
Previously updated : 07/20/2022 Last updated : 11/06/2022 keywords: image recognition, image identifier, image recognition app, custom vision
This documentation contains the following types of articles:
* The [tutorials](./iot-visual-alerts-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions. <!--* The [conceptual articles](Vision-API-How-to-Topics/call-read-api.md) provide in-depth explanations of the service's functionality and features.-->
-For a more structured approach, follow a Learn module for Custom Vision:
+For a more structured approach, follow a Training module for Custom Vision:
* [Classify images with the Custom Vision service](/training/modules/classify-images-custom-vision/) * [Classify endangered bird species with Custom Vision](/training/modules/cv-classify-bird-species/)
cognitive-services Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-synthesis.md
You can use the following REST API operations for batch synthesis:
| List batch synthesis | `GET` | texttospeech/3.1-preview1/batchsynthesis | | Delete batch synthesis | `DELETE` | texttospeech/3.1-preview1/batchsynthesis/{id} |
+For code samples, see [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch-synthesis).
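As an illustration of the create operation listed above, the following C# sketch submits a request with `HttpClient`. The endpoint host and key are placeholders, and the body fields are assumptions about the preview schema; consult the schema described in this article and the linked samples for the exact contract.
```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class BatchSynthesisClient
{
    static async Task Main()
    {
        // Placeholder values: substitute your Speech resource endpoint and key.
        const string host = "https://<your-speech-endpoint>";
        const string key = "<your-speech-key>";

        // Illustrative request body; field names are assumptions about the preview schema.
        const string body = @"{
          ""displayName"": ""batch synthesis sample"",
          ""textType"": ""PlainText"",
          ""inputs"": [ { ""text"": ""The rainbow has seven colors."" } ],
          ""synthesisConfig"": { ""voice"": ""en-US-JennyNeural"" }
        }";

        using var client = new HttpClient();
        using var request = new HttpRequestMessage(HttpMethod.Post, $"{host}/texttospeech/3.1-preview1/batchsynthesis");
        request.Headers.Add("Ocp-Apim-Subscription-Key", key);
        request.Content = new StringContent(body, Encoding.UTF8, "application/json");

        HttpResponseMessage response = await client.SendAsync(request);
        Console.WriteLine($"Create batch synthesis returned {(int)response.StatusCode}");
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```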
+ ## Create batch synthesis To submit a batch synthesis request, construct the HTTP POST request body according to the following instructions:
cognitive-services Extract Excel Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/tutorials/extract-excel-information.md
Previously updated : 07/27/2022 Last updated : 11/21/2022
In this tutorial, you'll learn how to:
- A Microsoft Azure account. [Create a free account](https://azure.microsoft.com/free/cognitive-services/) or [sign in](https://portal.azure.com/). - A Language resource. If you don't have one, you can [create one in the Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) and use the free tier to complete this tutorial. - The [key and endpoint](../../../cognitive-services-apis-create-account.md#get-the-keys-for-your-resource) that was generated for you during sign-up.-- A spreadsheet containing tenant issues. Example data is provided on GitHub-- Microsoft 365, with OneDrive for business.
+- A spreadsheet containing tenant issues. Example data for this tutorial is [available on GitHub](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/TextAnalytics/sample-data/ReportedIssues.xlsx).
+- Microsoft 365, with [OneDrive for business](https://www.microsoft.com/microsoft-365/onedrive/onedrive-for-business).
## Add the Excel file to OneDrive for Business
cognitive-services What Is Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/what-is-personalizer.md
ms.
Previously updated : 07/06/2022 Last updated : 11/17/2022 keywords: personalizer, Azure personalizer, machine learning # What is Personalizer?
-Azure Personalizer helps your applications make smarter decisions at scale using **reinforcement learning**. Personalizer can determine the best actions to take in a variety of scenarios:
+Azure Personalizer is an AI service that helps your applications make smarter decisions at scale using **reinforcement learning**. Personalizer processes information about the state of your application, scenario, and/or users (*contexts*), and a set of possible decisions and related attributes (*actions*) to determine the best decision to make. Feedback from your application (*rewards*) is sent to Personalizer to learn how to improve its decision-making ability in near-real time.
+
+Personalizer can determine the best actions to take in a variety of scenarios:
* E-commerce: What product should be shown to customers to maximize the likelihood of a purchase? * Content recommendation: What article should be shown to increase the click-through rate? * Content design: Where should an advertisement be placed to optimize user engagement on a website? * Communication: When and how should a notification be sent to maximize the chance of a response?
-Personalizer processes information about the state of your application, scenario, and/or users (*contexts*), and a set of possible decisions and related attributes (*actions*) to determine the best decision to make. Feedback from your application (*rewards*) is sent to Personalizer to learn how to improve its decision-making ability in near-real time.
-
-To get started with the Personalizer, follow the [**quickstart guide**](quickstart-personalizer-sdk.md), or try Personalizer with this [interactive demo](https://personalizerdevdemo.azurewebsites.net/).
-
+To get started with the Personalizer, follow the [**quickstart guide**](quickstart-personalizer-sdk.md), or try Personalizer in your browser with this [interactive demo](https://personalizerdevdemo.azurewebsites.net/).
This documentation contains the following types of articles:
Personalizer uses reinforcement learning to select the best *action* for a given
Personalizer empowers you to take advantage of the power and flexibility of reinforcement learning using just two primary APIs.
-The **Rank** [API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank) is called by your application each time there is a decision to be made. The application sends a JSON containing a set of actions, features that describe each action, and features that describe the current context. Each Rank API call is known as an **event** and noted with a unique _event ID_. Personalizer then returns the ID of the best action that maximizes the total average reward as determined by the underlying model.
+The **Rank** [API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank) is called by your application each time there's a decision to be made. The application sends a JSON containing a set of actions, features that describe each action, and features that describe the current context. Each Rank API call is known as an **event** and noted with a unique _event ID_. Personalizer then returns the ID of the best action that maximizes the total average reward as determined by the underlying model.
-The **Reward** [API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Reward) is called by your application whenever there is feedback that can help Personalizer learn if the action ID returned in the *Rank* call provided value. For example, if a user clicked on the suggested news article, or completed the purchase of a suggested product. A call to then Reward API can be in real-time (just after the Rank call is made) or delayed to better fit the needs of the scenario. The reward score is determined your business metrics and objectives, and can be generated by an algorithm or rules in your application. The score is a real-valued number between 0 and 1.
+The **Reward** [API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Reward) is called by your application whenever there's feedback that can help Personalizer learn if the action ID returned in the *Rank* call provided value. For example, if a user clicked on the suggested news article, or completed the purchase of a suggested product. A call to the Reward API can be made in real time (just after the Rank call is made) or delayed to better fit the needs of the scenario. The reward score is determined by your business metrics and objectives, and can be generated by an algorithm or rules in your application. The score is a real-valued number between 0 and 1.
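As a minimal sketch of this Rank-then-Reward round trip, assuming the `Azure.AI.Personalizer` .NET client library (type and method names can differ between SDK versions, and the endpoint, key, actions, and features are placeholders):
```csharp
using System;
using System.Collections.Generic;
using Azure;
using Azure.AI.Personalizer;

class RankAndReward
{
    static void Main()
    {
        // Placeholder endpoint and key for your Personalizer resource.
        var client = new PersonalizerClient(
            new Uri("https://<your-personalizer-resource>.cognitiveservices.azure.com/"),
            new AzureKeyCredential("<your-key>"));

        // Actions: the items to choose between, each described by features.
        var actions = new List<PersonalizerRankableAction>
        {
            new PersonalizerRankableAction("article-sports", new List<object> { new { topic = "sports", length = "short" } }),
            new PersonalizerRankableAction("article-finance", new List<object> { new { topic = "finance", length = "long" } })
        };

        // Context features describe the current user and state.
        var contextFeatures = new List<object> { new { device = "mobile", timeOfDay = "morning" } };

        // Rank: returns the best action to show plus an event ID for this decision.
        Response<PersonalizerRankResult> response = client.Rank(
            new PersonalizerRankOptions(actions: actions, contextFeatures: contextFeatures));
        Console.WriteLine($"Show: {response.Value.RewardActionId}");

        // Reward: report how well the decision worked, as a score between 0 and 1.
        client.Reward(response.Value.EventId, 1.0f);
    }
}
```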
### Learning modes
-* **[Apprentice mode](concept-apprentice-mode.md)** Similar to how an apprentice learns a craft from observing an expert, Apprentice mode enables Personalizer to learn by observing your application's current decision logic. This helps to mitigate the so-called "cold start" problem with a new untrained model, and allows you to validate the action and context features that are sent to Personalizer. In Apprentice mode, each call to the Rank API returns the _baseline action_ or _default action_, that is the action that the application would've taken without using Personalizer. This is sent by your application to Personalizer in the Rank API as the first item in the set of possible actions.
+* **[Apprentice mode](concept-apprentice-mode.md)** Similar to how an apprentice learns a craft from observing an expert, Apprentice mode enables Personalizer to learn by observing your application's current decision logic. This helps to mitigate the so-called "cold start" problem with a new untrained model, and allows you to validate the action and context features that are sent to Personalizer. In Apprentice mode, each call to the Rank API returns the _baseline action_ or _default action_, that is, the action that the application would have taken without using Personalizer. This is sent by your application to Personalizer in the Rank API as the first item in the set of possible actions.
* **Online mode** Personalizer will return the best action, given the context, as determined by the underlying RL model and explores other possible actions that may improve performance. Personalizer learns from feedback provided in calls to the Reward API.
Note that Personalizer uses collective information across all users to learn the
* Log individual users' preferences or historical data.
-### Example scenarios
+## Example scenarios
Here are a few examples where Personalizer can be used to select the best content to render for a user.
Here are a few examples where Personalizer can be used to select the best conten
Use Personalizer when your scenario has:
-* A limited set of actions or items to select from in each personalization event. We recommend no more than ~50 actions in each Rank API call. If you have a larger set of possible actions, we suggest using a [using a recommendation engine].(where-can-you-use-personalizer.md#how-to-use-personalizer-with-a-recommendation-solution) or another mechanism to reduce the list of actions prior to calling the Rank API.
+* A limited set of actions or items to select from in each personalization event. We recommend no more than ~50 actions in each Rank API call. If you have a larger set of possible actions, we suggest [using a recommendation engine](where-can-you-use-personalizer.md#how-to-use-personalizer-with-a-recommendation-solution) or another mechanism to reduce the list of actions prior to calling the Rank API.
* Information describing the actions (_action features_). * Information describing the current context (_contextual features_). * Sufficient data volume to enable Personalizer to learn. In general, we recommend a minimum of ~1,000 events per day to enable Personalizer to learn effectively. If Personalizer doesn't receive sufficient data, the service takes longer to determine the best actions.
+## Responsible use of AI
+At Microsoft, we're committed to the advancement of AI driven by principles that put people first. AI models such as the ones available in the Personalizer service have significant potential benefits,
+but without careful design and thoughtful mitigations, such models have the potential to generate incorrect or even harmful content. Microsoft has made significant investments to help guard against abuse and unintended harm, incorporating [Microsoft's principles for responsible AI use](https://www.microsoft.com/ai/responsible-ai), building content filters to support customers, and providing responsible AI implementation guidance to onboarded customers. See the [Responsible AI docs for Personalizer](responsible-use-cases.md).
-## Integrating Personalizer in an application
+## Integrate Personalizer into an application
-1. [Design](concepts-features.md) and plan the **_actions_**, and **_context_**. Determine the how to interpret feedback as a **_reward_** score.
-1. Each [Personalizer Resource](how-to-settings.md) you create is defined as one _Learning Loop_. The loop will receive the both the Rank and Reward calls for that content or user experience and train an underlying RL model. There are
+1. [Design](concepts-features.md) and plan the **_actions_**, and **_context_**. Determine how to interpret feedback as a **_reward_** score.
+1. Each [Personalizer Resource](how-to-settings.md) you create is defined as one _Learning Loop_. The loop will receive both the Rank and Reward calls for that content or user experience and train an underlying RL model. There are
|Resource type| Purpose| |--|--|
Use Personalizer when your scenario has:
1. Add Personalizer to your application, website, or system: 1. Add a **Rank** call to Personalizer in your application, website, or system to determine the best action.
- 1. Use the the best action, as specified as a _reward action ID_ in your scenario.
+ 1. Use the best action, specified as the _reward action ID_, in your scenario.
1. Apply _business logic_ to user behavior or feedback data to determine the **reward** score. For example:
- |Behavior|Calculated reward score|
- |--|--|
- |User selected a news article suggested by Personalizer |**1**|
- |User selected a news article _not_ suggested by Personalizer |**0**|
- |User hesitated to select a news article, scrolled around indecisively, and ultimately selected the news article suggested by Personalizer |**0.5**|
+ |Behavior|Calculated reward score|
+ |--|--|
+ |User selected a news article suggested by Personalizer |**1**|
+ |User selected a news article _not_ suggested by Personalizer |**0**|
+ |User hesitated to select a news article, scrolled around indecisively, and ultimately selected the news article suggested by Personalizer |**0.5**|
1. Add a **Reward** call sending a reward score between 0 and 1 * Immediately after feedback is received. * Or sometime later in scenarios where delayed feedback is expected. 1. Evaluate your loop with an [offline evaluation](concepts-offline-evaluation.md) after a period of time when Personalizer has received significant data to make online decisions. An offline evaluation allows you to test and assess the effectiveness of the Personalizer Service without code changes or user impact.
-## Reference
-
-* [Personalizer C#/.NET SDK](/dotnet/api/overview/azure/cognitiveservices/client/personalizer)
-* [Personalizer Go SDK](https://github.com/Azure/azure-sdk-for-go/tree/master/services/preview)
-* [Personalizer JavaScript SDK](/javascript/api/@azure/cognitiveservices-personalizer/)
-* [Personalizer Python SDK](/python/api/overview/azure/cognitiveservices/personalizer)
-* [REST APIs](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank)
## Next steps > [!div class="nextstepaction"]
-> [How Personalizer works](how-personalizer-works.md)
-> [What is Reinforcement Learning?](concepts-reinforcement-learning.md)
+> [Personalizer quickstart](quickstart-personalizer-sdk.md)
+
+* [How Personalizer works](how-personalizer-works.md)
+* [What is Reinforcement Learning?](concepts-reinforcement-learning.md)
communication-services Simulcast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/simulcast.md
+
+ Title: Azure Communication Services Simulcast
+
+description: Overview of Simulcast
+++++ Last updated : 11/21/2022+++
+# Simulcast
+Simulcast is provided as a preview for developers and may change based on feedback that we receive. To use this feature, use the 1.9.1-beta.1+ release of the Azure Communication Services Calling Web SDK. Currently, we support simulcast send from desktop Chrome and desktop Edge. Simulcast send from mobile devices will be available in the future.
+
+Simulcast is a technique by which an endpoint encodes the same video feed at different qualities and sends these video feeds to a selective forwarding unit (SFU), which decides which quality each receiver gets.
+Without simulcast support, calls with three or more participants get a degraded video experience. If a video receiver with poor network conditions joins the conference, the quality of video received from the sender degrades for all other participants, because the video sender optimizes its video feed for the lowest common denominator. With simulcast, the impact of the lowest common denominator is minimized, because the video sender produces a specialized low-fidelity video encoding for the subset of receivers that are on poor networks (or otherwise constrained).
+## Scenarios where simulcast is useful
+- Users with unknown bandwidth constraints joining. When a new joiner joins the call, its bandwidth conditions are unknown when it starts to receive video. To avoid overshooting the available bandwidth, it isn't sent high-quality content until a reliable estimate of its bandwidth is known. In unicast, if everyone received high-quality content, that would degrade quality for every other receiver until a reliable estimate of the bandwidth conditions is achieved. In simulcast, lower resolution video can be sent to the new joiner until its bandwidth conditions are known, while others keep receiving high-quality video.
+In a similar way, if one of the receivers is on a poor network, the video quality of all other receivers on good networks would be degraded in unicast to accommodate that receiver. With simulcast, lower resolution/bitrate content can be sent to the receiver on the poor network and higher resolution/bitrate content can be sent to the receivers on good networks.
+- In content sharing, where thumbnails are often used for video content, lower resolution videos are requested from the producers. If someone's video also needs to be zoomed in parallel, the zoomed video is kept at low quality so that others looking at the content don't receive both content and video at high quality, which would waste bandwidth.
+- When video is sent to a receiver with a larger view (such as a desktop receiver, where videos are usually rendered on big views) and to another receiver with a smaller view (such as a mobile receiver, where screens are usually small). With simulcast, the quality of the larger view isn't affected by the quality of the smaller view. The sender sends a high resolution to the larger-view receiver and a smaller resolution to the smaller-view receiver.
+
+## How it's used/works
+Simulcast is adaptively enabled on-demand to save bandwidth and CPU resources of the publisher.
+Each subscriber notifies the SFU of its maximum resolution preference based on the size of the renderer element.
+The SFU tracks the bandwidth conditions and resolution requirements of all current subscribers to the publisher's video and forwards the aggregated parameters of all subscribers to the publisher. The publisher picks the best set of parameters to give optimal quality to all receivers, considering all publisher and subscriber constraints.
+The SFU receives multiple qualities of the content and chooses the quality to forward to each subscriber. There is no transcoding of the content on the SFU, and the SFU won't forward a higher resolution than the subscriber requested.
+## Limitations
+Web endpoints support simulcast only for video content, with a maximum of two distinct qualities.
+## Resolutions
+In adaptive simulcast, there are no fixed resolutions for high- and low-quality video streams. An optimal set of either a single stream or multiple streams is chosen. If every subscriber to the video requests and is capable of receiving the maximum resolution that the publisher can provide, only that maximum resolution is sent.
+The following resolutions are supported and requested by receivers in web simulcast: 180p, 240p, 360p, 540p, 720p.
+If the input resolution is limited, the received resolution is capped at that resolution.
+In simulcast, the effective resolution sent can also be degraded internally, so the actual received resolution of the video can vary.
communication-services Quickstart Botframework Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/quickstart-botframework-integration.md
# Add a bot to your chat app > [!IMPORTANT]
-> This functionality is in private preview, and restricted to a limited number of Azure Communication Services early adopters. You can [submit this form to request participation in the preview](https://forms.office.com/r/HBm8jRuuGZ) and we will review your scenario(s) and evaluate your participation in the preview.
+> This functionality is in public preview.
>
-> Private Preview APIs and SDKs are provided without a service-level agreement, and are not appropriate for production workloads and should only be used with test users and test data. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-- In this quickstart, you will learn how to build conversational AI experiences in a chat application using Azure Communication Services Chat messaging channel that is available under Azure Bot Services. This article will describe how to create a bot using BotFramework SDK and how to integrate this bot into any chat application that is built using Communication Services Chat SDK.
Sometimes the bot wouldn't be able to understand or answer a question or a custo
## Handling bot to bot communication
- There may be certain usecases where two bots need to be added to the same thread. If this occurs, then the bots may start replying to each other's messages. If this scenario is not handled properly, the bots' automated interaction between themselves may result in an infinite loop of messages. This scenario is handled by Azure Communication Services Chat by throttling the requests which will result in the bot not being able to send and receive the messages. You can learn more about the [throttle limits](/azure/communication-services/concepts/service-limits#chat).
 There may be certain use cases where two bots need to be added to the same chat thread to provide different services. In such use cases, you may need to ensure that bots don't start sending automated replies to each other's messages. If not handled properly, the bots' automated interaction between themselves may result in an infinite loop of messages. You can verify the ACS user identity of the sender of a message from the activity's `From.Id` field to see if it belongs to another bot, and take the required action to prevent such a communication flow (a minimal check is sketched below). If such a scenario results in high call volumes, then the Azure Communication Services Chat channel will start throttling the requests, which will result in the bot not being able to send and receive messages. You can learn more about the [throttle limits](/azure/communication-services/concepts/service-limits#chat).
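Here's a minimal sketch of such a check using the Bot Framework SDK for .NET; the other bot's ACS user ID is a placeholder you'd supply from your own configuration:
```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

public class EchoBot : ActivityHandler
{
    // Placeholder: the ACS user ID that the other bot uses in this chat thread.
    private const string OtherBotAcsUserId = "<acs-user-id-of-other-bot>";

    protected override async Task OnMessageActivityAsync(
        ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
    {
        // Ignore messages sent by the other bot to avoid an automated reply loop.
        if (turnContext.Activity.From?.Id == OtherBotAcsUserId)
        {
            return;
        }

        await turnContext.SendActivityAsync(
            MessageFactory.Text($"Echo: {turnContext.Activity.Text}"), cancellationToken);
    }
}
```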
## Troubleshooting
communication-services Chat Android Push Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/chat-android-push-notification.md
Push notifications let clients be notified for incoming messages and other opera
11. Add a custom `WorkManager` initializer by creating a class implementing `Configuration.Provider`: ```java
-public class MyAppConfiguration extends Application implements Configuration.Provider {
- Consumer<Throwable> exceptionHandler = new Consumer<Throwable>() {
+ public class MyAppConfiguration extends Application implements Configuration.Provider {
+ Consumer<Throwable> exceptionHandler = new Consumer<Throwable>() {
+ @Override
+ public void accept(Throwable throwable) {
+ Log.i("YOUR_TAG", "Registration failed for push notifications!" + throwable.getMessage());
+ }
+ };
+
@Override
- public void accept(Throwable throwable) {
- Log.i("YOUR_TAG", "Registration failed for push notifications!" + throwable.getMessage());
+ public void onCreate() {
+ super.onCreate();
+ // Initialize application parameters here
+ WorkManager.initialize(getApplicationContext(), getWorkManagerConfiguration());
+ }
+
+ @NonNull
+ @Override
+ public Configuration getWorkManagerConfiguration() {
+ return new Configuration.Builder().
+ setWorkerFactory(new RegistrationRenewalWorkerFactory(COMMUNICATION_TOKEN_CREDENTIAL, exceptionHandler)).build();
}
- };
- @Override
- public void onCreate() {
- super.onCreate();
- WorkManager.initialize(getApplicationContext(), getWorkManagerConfiguration());
- }
- @NonNull
- @Override
- public Configuration getWorkManagerConfiguration() {
- return new Configuration.Builder().
- setWorkerFactory(new RegistrationRenewalWorkerFactory(COMMUNICATION_TOKEN_CREDENTIAL, exceptionHandler)).build();
}
-}
```
+**Explanation of the code above:** The default initializer of `WorkManager` was disabled in step 9. This step implements `Configuration.Provider` to provide a customized `WorkerFactory`, which is responsible for creating workers at runtime.
+
+If the app is integrated with an Azure Function, initialization of application parameters should be added in the `onCreate()` method. The `getWorkManagerConfiguration()` method is called when the application is starting, before any activity, service, or receiver objects (excluding content providers) have been created, so application parameters can be initialized before they're used. More details can be found in the sample chat app.
12. Add the `android:name=.MyAppConfiguration` field, which uses the class name from step 11, into `AndroidManifest.xml`:
communication-services Integrate Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/integrate-azure-function.md
+
+ Title: Enable Azure Function in chat app
+
+description: Learn how to enable Azure Function
+++ Last updated : 11/03/2022++++
+# Integrate Azure Function
+## Introduction
+This tutorial provides detailed guidance on how to set up an Azure Function to receive user-related information. Setting up an Azure Function is highly recommended. It helps to avoid hard-coding application parameters in the Contoso app (such as user ID and user token). This information is highly confidential. More importantly, we refresh user tokens periodically on the backend. Hard-coding the user ID and token combination requires editing the value after every refresh.
+
+## Prerequisites
+
+Before you get started, make sure to:
+
+- Create an Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Install Visual Studio Code.
+
+## Setting up functions
+1. Install the Azure Function extension in Visual Studio Code. You can install it from Visual Studio Code's plugin browser or by following [this link](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions).
+2. Set up a local Azure Function app by following [this link](../../azure-functions/functions-develop-vs-code.md?tabs=csharp#create-an-azure-functions-project). We need to create a local function using the HTTP trigger template in JavaScript.
+3. Install Azure Communication Services libraries. We'll use the Identity library to generate User Access Tokens. Run the npm install command in your local Azure Function app directory to install the Azure Communication Services Identity SDK for JavaScript.
+
+```
+ npm install @azure/communication-identity --save
+```
+4. Modify the `index.js` file so it looks like the code below:
+```JavaScript
+ const { CommunicationIdentityClient } = require('@azure/communication-identity');
+ const connectionString = '<your_connection_string>'
+ const acsEndpoint = "<your_ACS_endpoint>"
+
+ module.exports = async function (context, req) {
+ let tokenClient = new CommunicationIdentityClient(connectionString);
+
+ const userIDHolder = await tokenClient.createUser();
+ const userId = userIDHolder.communicationUserId
+
+ const userToken = await (await tokenClient.getToken(userIDHolder, ["chat"])).token;
+
+ context.res = {
+ body: {
+ acsEndpoint,
+ userId,
+ userToken
+ }
+ };
+ }
+```
+**Explanation of the code above**: The first line imports the `CommunicationIdentityClient` class. The connection string on the second line can be found in your ACS resource in the Azure portal. The `acsEndpoint` is the URL of the ACS resource that was created.
+
+5. Open the local Azure Function folder in Visual Studio Code. Open the `index.js` and run the local Azure Function. A local Azure Function endpoint will be created and printed in the terminal. The printed message looks similar to:
+
+```
+Functions:
+HttpTrigger1: [GET,POST] http://localhost:7071/api/HttpTrigger1
+```
+
+Open the link in a browser. The result will be similar to this example:
+```
+ {
+ "acsEndpoint": "<Azure Function endpoint>",
+ "userId": "8:acs:a636364c-c565-435d-9818-95247f8a1471_00000014-f43f-b90f-9f3b-8e3a0d00c5d9",
+ "userToken": "eyJhbGciOiJSUzI1NiIsImtpZCI6IjEwNiIsIng1dCI6Im9QMWFxQnlfR3hZU3pSaXhuQ25zdE5PU2p2cyIsInR5cCI6IkpXVCJ9.eyJza3lwZWlkIjoiYWNzOmE2MzYzNjRjLWM1NjUtNDM1ZC05ODE4LTk1MjQ3ZjhhMTQ3MV8wMDAwMDAxNC1mNDNmLWI5MGYtOWYzYi04ZTNhMGQwMGM1ZDkiLCJzY3AiOjE3OTIsImNzaSI6IjE2Njc4NjI3NjIiLCJleHAiOjE2Njc5NDkxNjIsImFjc1Njb3BlIjoiY2hhdCIsInJlc291cmNlSWQiOiJhNjM2MzY0Yy1jNTY1LTQzNWQtOTgxOC05NTI0N2Y4YTE0NzEiLCJyZXNvdXJjZUxvY2F0aW9uIjoidW5pdGVkc3RhdGVzIiwiaWF0IjoxNjY3ODYyNzYyfQ.t-WpaUUmLJaD0V2vgn3M5EKdJUQ_JnR2jnBUZq3J0zMADTnFss6TNHMIQ-Zvsumwy14T1rpw-1FMjR-zz2icxo_mcTEM6hG77gHzEgMR4ClGuE1uRN7O4-326ql5MDixczFeIvIG8s9kAeJQl8N9YjulvRkGS_JZaqMs2T8Mu7qzdIOiXxxlmcl0HeplxLaW59ICF_M4VPgUYFb4PWMRqLXWjKyQ_WhiaDC3FvhpE_Bdb5U1eQXDw793V1_CRyx9jMuOB8Ao7DzqLBQEhgNN3A9jfEvIE3gdwafpBWlQEdw-Uuf2p1_xzvr0Akf3ziWUsVXb9pxNlQQCc19ztl3MIQ"
+ }
+```
+
+6. Deploy the local function to the cloud. More details can be found in [this documentation](../../azure-functions/functions-develop-vs-code.md).
+
+7. **Test the deployed Azure Function.** First, find your Azure Function in the Azure portal. Then, use the "Get Function URL" button to get the Azure Function endpoint. The result you see should be similar to what was shown in step 5. The Azure Function endpoint will be used in the app for initializing application parameters.
+
+8. Implement `UserTokenClient`, which is used to call the target Azure Function resource and obtain the ACS endpoint, user ID and user token from the returned JSON object. Refer to the sample app for usage.
+
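As a rough, language-neutral illustration of step 8 (the sample app implements this in its own platform code), the following C# sketch calls the deployed function endpoint and reads the three fields shown in the example response above. The function URL is a placeholder:
```csharp
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class UserTokenClient
{
    // Placeholder: the Azure Function endpoint from the "Get Function URL" button.
    private const string FunctionUrl = "https://<your-function-app>.azurewebsites.net/api/HttpTrigger1";

    public static async Task Main()
    {
        using var http = new HttpClient();
        string json = await http.GetStringAsync(FunctionUrl);

        // The response carries the three values shown above: acsEndpoint, userId, userToken.
        using JsonDocument doc = JsonDocument.Parse(json);
        string acsEndpoint = doc.RootElement.GetProperty("acsEndpoint").GetString();
        string userId = doc.RootElement.GetProperty("userId").GetString();
        string userToken = doc.RootElement.GetProperty("userToken").GetString();

        Console.WriteLine($"Endpoint: {acsEndpoint}, user: {userId}, token length: {userToken?.Length}");
    }
}
```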
+## Troubleshooting guide
+1. If the Azure Function extension fails to deploy the local function to the Azure cloud, it's likely due to a bug in the versions of Visual Studio Code and the Azure Function extension being used. The following version combination has been tested to work: Visual Studio Code version `1.68.1` and Azure Function extension version `1.2.1`.
+2. The place to initialize application constants is tricky but important. Double-check the [chat Android quick-start](https://learn.microsoft.com/azure/communication-services/quickstarts/chat/get-started), specifically the highlighted note in the "Set up application constants" section, and compare it with the sample app of the version you're consuming.
+
+## (Optional) secure the Azure Function endpoint
+For demonstration purposes, this sample uses a publicly accessible endpoint by default to fetch an Azure Communication Services token. For production scenarios, one option is to use your own secured endpoint to provision your own tokens.
+
+With extra configuration, this sample supports connecting to an Azure Active Directory (Azure AD) protected endpoint so that user login is required for the app to fetch an Azure Communication Services token. The user will be required to sign in with a Microsoft account to access the app. This setup increases the security level, but requires users to log in. Decide whether to enable it based on your use cases.
+
+Note that we currently don't support Azure AD in the sample code. Follow the links below to enable it in your app and Azure Function:
+
+[Register your app under Azure Active Directory (using Android platform settings)](../../active-directory/develop/tutorial-v2-android.md).
+
+[Configure your App Service or Azure Functions app to use Azure AD log in](../../app-service/configure-authentication-provider-aad.md).
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
IP addresses are broken down into the following types:
| Type | Description | |--|--| | Public inbound IP address | Used for app traffic in an external deployment, and management traffic in both internal and external deployments. |
-| Outbound public IP | Used as the "from" IP for outbound connections that leave the virtual network. These connections aren't routed down a VPN. |
+| Outbound public IP | Used as the "from" IP for outbound connections that leave the virtual network. These connections aren't routed down a VPN. Using a NAT gateway or other proxy for outbound traffic from a Container App environment is not supported. |
| Internal load balancer IP address | This address only exists in an internal deployment. | | App-assigned IP-based TLS/SSL addresses | These addresses are only possible with an external deployment, and when IP-based TLS/SSL binding is configured. |
container-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md
Previously updated : 10/25/2022 Last updated : 11/21/2022
The following quotas are on a per subscription basis for Azure Container Apps.
-To request an increase in quota amounts for your container app, [submit a support ticket](https://azure.microsoft.com/support/create-ticket/).
+To request an increase in quota amounts for your container app, learn [how to request a limit increase](faq.yml#how-can-i-request-a-quota-increase-) and [submit a support ticket](https://azure.microsoft.com/support/create-ticket/).
| Feature | Scope | Default | Is Configurable<sup>1</sup> | Remarks | |--|--|--|--|--|
-| Environments | Region | 5 | Yes | |
+| Environments | Region | Up to 5 | Yes | Limit up to five environments per subscription, per region.<br><br>For example, if you deploy to three regions you can get up to 15 environments for a single subscription. |
| Container Apps | Environment | 20 | Yes | | | Revisions | Container app | 100 | No | | | Replicas | Revision | 30 | Yes | | | Cores | Replica | 2 | No | Maximum number of cores that can be requested by a revision replica. | | Cores | Environment | 20 | Yes | Maximum number of cores an environment can accommodate. Calculated by the sum of cores requested by each active replica of all revisions in an environment. |
-<sup>1</sup> The **Is Configurable** column denotes that a feature maximum may be increased through a [support request](https://azure.microsoft.com/support/create-ticket/).
+For more information regarding quotas, see the [Quotas Roadmap](https://github.com/microsoft/azure-container-apps/issues/503) in the Azure Container Apps GitHub repository.
+
+> [!NOTE]
+> [Free trial](https://azure.microsoft.com/offers/ms-azr-0044p) and [Azure for Students](https://azure.microsoft.com/free/students/) subscriptions are limited to one environment per subscription globally.
+
+<sup>1</sup> The **Is Configurable** column denotes that a feature maximum may be increased through a [support request](https://azure.microsoft.com/support/create-ticket/). For more information, see [how to request a limit increase](faq.yml#how-can-i-request-a-quota-increase-).
## Considerations * If an environment runs out of allowed cores: * Provisioning times out with a failure
- * The app silently refuses to scale out
+ * The app may be restricted from scaling out
container-instances Container Instances Init Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-init-container.md
This article shows how to use an Azure Resource Manager template to configure a
* **Order of execution** - Init containers are executed in the order specified in the template, and before other containers. By default, you can specify a maximum of 59 init containers per container group. At least one non-init container must be in the group. * **Host environment** - Init containers run on the same hardware as the rest of the containers in the group. * **Resources** - You don't specify resources for init containers. They are granted the total resources such as CPUs and memory available to the container group. While an init container runs, no other containers run in the group.
-* **Supported properties** - Init containers can use group properties such as volumes, secrets, and managed identities. However, they can't use ports or an IP address if configured for the container group.
+* **Supported properties** - Init containers can use some group properties such as volumes and secrets. However, they can't use ports, an IP address, or managed identities if these are configured for the container group.
* **Restart policy** - Each init container must exit successfully before the next container in the group starts. If an init container doesn't exit successfully, its restart action depends on the [restart policy](container-instances-restart-policy.md) configured for the group: |Policy in group |Policy in init |
container-registry Container Registry Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-private-link.md
This article shows how to configure a private endpoint for your registry using t
[!INCLUDE [container-registry-scanning-limitation](../../includes/container-registry-scanning-limitation.md)] > [!NOTE]
-> Starting October 2021, new container registries allow a maximum of 200 private endpoints. Registries created earlier allow a maximum of 10 private endpoints. Use the az acr show-usage command to see the limit for your registry. Please open a support ticket if this limit needs to be increased to 200 private endpoints.
+> Starting from October 2021, new container registries allow a maximum of 200 private endpoints. Registries created earlier allow a maximum of 10 private endpoints. Use the [az acr show-usage](/cli/azure/acr#az-acr-show-usage) command to see the limit for your registry. Please open a support ticket if you need the limit increased to 200 private endpoints.
## Prerequisites
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
After the analytical store is enabled, based on the data retention needs of the
Analytical store relies on Azure Storage and offers the following protection against physical failure:
- * Single region Azure Cosmos DB database accounts allocate analytical store in Locally Redundant Storage (LRS) Azure Storage accounts.
- * If any geo-region replication is configured for the Azure Cosmos DB database account, analytical store is allocated in Zone-Redundant Storage (ZRS) Azure storage accounts.
+ * By default, Azure Cosmos DB database accounts allocate analytical store in Locally Redundant Storage (LRS) accounts.
+ * If any geo-region of the database account is configured for zone redundancy, the analytical store for that region is allocated in Zone-Redundant Storage (ZRS) accounts. Customers need to enable Availability Zones on a region of their Azure Cosmos DB database account to have the analytical data of that region stored in ZRS.
## Backup
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
You can [provision and manage your Azure Cosmos DB account](how-to-manage-databa
| Maximum number of accounts per subscription | 50 by default. <sup>1</sup> | | Maximum number of regional failovers | 10/hour by default. <sup>1</sup> <sup>2</sup> |
-<sup>1</sup> You can increase these limits by creating an [Azure Support request](create-support-request-quota-increase.md).
+<sup>1</sup> You can increase these limits by creating an [Azure Support request](create-support-request-quota-increase.md), up to a maximum of 1,000.
<sup>2</sup> Regional failovers only apply to single region writes accounts. Multi-region write accounts don't require or have any limits on changing the write region.
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-dotnet-v3.md
Some settings in `ConnectionPolicy` have been renamed or replaced by `CosmosClie
|`MediaRequestTimeout`|Removed. Attachments are no longer supported.| |`SetCurrentLocation`|`CosmosClientOptions.ApplicationRegion` can be used to achieve the same effect.| |`PreferredLocations`|`CosmosClientOptions.ApplicationPreferredRegions` can be used to achieve the same effect.|
-|`UserAgentSuffix`| | `CosmosClientBuilder.ApplicationName` can be used to achieve the same effect.|
+|`UserAgentSuffix`|`CosmosClientBuilder.ApplicationName` can be used to achieve the same effect.|
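As a short sketch of the v3 equivalents named in this table (shown on `CosmosClientOptions`; the fluent `CosmosClientBuilder` exposes equivalent methods, and the endpoint, key, and region values are placeholders):
```csharp
using System.Collections.Generic;
using Microsoft.Azure.Cosmos;

static class ClientSetup
{
    public static CosmosClient CreateClient()
    {
        var options = new CosmosClientOptions
        {
            ApplicationName = "my-app-suffix",  // replaces ConnectionPolicy.UserAgentSuffix
            ApplicationRegion = Regions.WestUS2 // replaces SetCurrentLocation
            // Alternatively (not together with ApplicationRegion):
            // ApplicationPreferredRegions = new List<string> { Regions.WestUS2, Regions.EastUS } // replaces PreferredLocations
        };

        // Placeholder endpoint and key for your Azure Cosmos DB account.
        return new CosmosClient("<account-endpoint>", "<account-key>", options);
    }
}
```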
### Indexing policy
cost-management-billing Manage Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/manage-automation.md
Title: Manage Azure costs with automation
description: This article explains how you can manage Azure costs with automation. Previously updated : 04/05/2022 Last updated : 11/22/2022
You can configure budgets to start automated actions using Azure Action Groups.
## Data latency and rate limits
-We recommend that you call the APIs no more than once per day. Cost Management data is refreshed every four hours as new usage data is received from Azure resource providers. Calling more frequently doesn't provide more data. Instead, it creates increased load.
+We recommend that you call the APIs no more than once per day. Cost Management data is refreshed every four hours as new usage data is received from Azure resource providers. Calling more frequently doesn't provide more data. Instead, it creates increased load.
-<!-- For more information, see [Cost Management API latency and rate limits](../automate/api-latency-rate-limits.md) -->
+### Query API query processing units
+
+In addition to the existing rate limiting processes, the [Query API](/rest/api/cost-management/query) also limits processing based on the cost of API calls. The cost of an API call is expressed as query processing units (QPUs). QPUs are a performance currency, similar to [Cosmos DB RUs](../../cosmos-db/request-units.md), that abstracts system resources such as CPU and memory.
+
+#### QPU calculation
+
+Currently, one QPU is deducted from the allotted quotas for each month of data queried. For example, a query that spans a three-month date range consumes three QPUs. This logic might change without notice.
+
+#### QPU factors
+
+The following factor affects the number of QPUs consumed by an API request.
+
+- Date range. As the date range in the request increases, the number of QPUs consumed increases.
+
+Other QPU factors might be added without notice.
+
+#### QPU quotas
+
+The following quotas are configured per tenant. Requests are throttled when any of these quotas is exhausted.
+
+- 12 QPU per 10 seconds
+- 60 QPU per 1 min
+- 600 QPU per 1 hour
+
+The quotas may be changed as needed, and more quotas may be added.
+
+#### Response headers
+
+You can examine the response headers to track the number of QPUs consumed by an API request and the number of QPUs remaining.
+
+`x-ms-ratelimit-microsoft.costmanagement-qpu-retry-after`
+
+Indicates the time to back off, in seconds. When a request is throttled with a 429 response, back off for the time specified in this header before retrying the request.
+
+`x-ms-ratelimit-microsoft.costmanagement-qpu-consumed`
+
+QPUs consumed by an API call.
+
+`x-ms-ratelimit-microsoft.costmanagement-qpu-remaining`
+
+List of remaining quotas.
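+
+The following is a minimal PowerShell sketch (not part of the Cost Management documentation) that calls the Query API and reads these headers. The scope, request body, and `api-version` value are placeholders to adapt, and `$token` is an Azure AD access token that you supply.
+
+```
+# Minimal sketch: call the Cost Management Query API and inspect the QPU rate-limit headers.
+# The scope, body, and api-version are placeholders; verify them for your environment.
+# Written for PowerShell 7 (uses -SkipHttpErrorCheck).
+$token = '<access-token>'
+$scope = 'subscriptions/00000000-0000-0000-0000-000000000000'
+$uri   = "https://management.azure.com/$scope/providers/Microsoft.CostManagement/query?api-version=2021-10-01"
+$body  = '{"type":"ActualCost","timeframe":"MonthToDate","dataset":{"granularity":"Daily"}}'
+
+$response = Invoke-WebRequest -Method Post -Uri $uri -Body $body -ContentType 'application/json' `
+    -Headers @{ Authorization = "Bearer $token" } -SkipHttpErrorCheck
+
+if ($response.StatusCode -eq 429) {
+    # Throttled: back off for the number of seconds the service indicates, then retry.
+    $retryAfter = [int]($response.Headers['x-ms-ratelimit-microsoft.costmanagement-qpu-retry-after'][0])
+    Start-Sleep -Seconds $retryAfter
+}
+else {
+    # Track QPU consumption and the remaining quotas.
+    $response.Headers['x-ms-ratelimit-microsoft.costmanagement-qpu-consumed']
+    $response.Headers['x-ms-ratelimit-microsoft.costmanagement-qpu-remaining']
+}
+```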
## Next steps
cost-management-billing Ea Portal Rest Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-rest-apis.md
This article describes the REST APIs for use with your Azure enterprise enrollme
Microsoft Enterprise Azure customers can get usage and billing information through REST APIs. The role owner (Enterprise Administrator, Department Administrator, Account Owner) must enable access to the API by generating a key from the Azure EA portal. Then, anyone provided with the enrollment number and key can access the data through the API.
-### Available APIs
+## Available APIs
**Balance and Summary -** The [Balance and Summary API](/rest/api/billing/enterprise/billing-enterprise-api-balance-summary) provides a monthly summary of information about balances, new purchases, Azure Marketplace service charges, adjustments, and overage charges. For more information, see [Reporting APIs for Enterprise customers - Balance and Summary](/rest/api/billing/enterprise/billing-enterprise-api-balance-summary).
Microsoft Enterprise Azure customers can get usage and billing information throu
**Billing Periods -** The [Billing Periods API](/rest/api/billing/enterprise/billing-enterprise-api-billing-periods) returns a list of billing periods that have consumption data for an enrollment in reverse chronological order. Each period contains a property pointing to the API route for the four sets of data, BalanceSummary, UsageDetails, Marketplace Charges, and PriceSheet. For more information, see [Reporting APIs for Enterprise customers - Billing Periods](/rest/api/billing/enterprise/billing-enterprise-api-billing-periods).
-### API key generation
+## Enable API data access
-Role owners can perform the following steps in the Azure EA portal. Navigate to **Reports** > **Download Usage** > **API Access Key**. Then they can:
+Role owners can perform the following steps in the Azure portal to enable API data access.
-- Generate and regenerate primary and secondary access keys.-- Revoke access keys.-- View start and end dates of access keys.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Search for **Cost Management + Billing** and then select it.
+3. Select **Billing scopes** from the navigation menu and then select the billing account that you want to work with.
+4. In the left navigation menu, select **Usage + Charges**.
+5. Select **Manage API Access Keys** to open the Manage API Access Keys window.
+ :::image type="content" source="./media/ea-portal-rest-apis/manage-api-access-keys.png" alt-text="Screenshot showing the Manage API Access Keys option." lightbox="./media/ea-portal-rest-apis/manage-api-access-keys.png" :::
-### Generate or retrieve the API Key
+In the Manage API Access Keys window, you can perform the following tasks:
-1. Sign in as an enterprise administrator.
-2. Select **Reports** on the left navigation window and then select the **Download Usage** tab.
-3. Select **API Access Key**.
-4. Under **Enrollment Access Keys**, select **regenerate** to generate either a primary or secondary key.
-5. Select **Expand Key** to view the entire generated API access key.
-6. Select **Copy** to get the API access key for immediate use.
+- Generate and view primary and secondary access keys
+- View start and end dates for access keys
+- Disable access keys
+### Generate the primary or secondary API key
-If you want to give the API access keys to people who aren't enterprise administrators in your enrollment, perform the following steps:
+1. Sign in to the Azure portal as an enterprise administrator.
+2. Select **Cost Management + Billing**.
+3. Select **Billing scopes** from the navigation menu and then select the billing account that you want to work with.
+4. In the navigation menu, select **Usage + Charges**.
+5. Select **Manage API Access Keys**.
+6. Select **Generate** to generate the key.
+ :::image type="content" source="./media/ea-portal-rest-apis/manage-api-access-keys-window.png" alt-text="Screenshot showing the Manage API Access Keys window." lightbox="./media/ea-portal-rest-apis/manage-api-access-keys-window.png" :::
+7. Select the **expand symbol** or select **Copy** to get the API access key for immediate use.
+ :::image type="content" source="./media/ea-portal-rest-apis/expand-symbol-copy.png" alt-text="Screenshot showing the expand symbol and Copy option." lightbox="./media/ea-portal-rest-apis/expand-symbol-copy.png" :::
-1. In the left navigation window, select **Manage**.
-2. Select the pencil symbol next to **DA view charges** (Department Administrator view charges).
-3. Select **Enable** and then select **Save**.
-4. Select the pencil symbol next to **AO view charges** (Account Owner view charges).
-5. Select **Enable** and then select **Save**.
+### Regenerate the primary or secondary API key
-![Screenshot showing DA and AO view charges enabled.](./media/ea-portal-rest-apis/create-ea-generate-or-retrieve-api-key-enable-ao-do-view.png)
+1. Sign in to the Azure portal as an enterprise administrator.
+2. Select **Cost Management + Billing**.
+3. Select **Billing scopes** from the navigation menu and then select the billing account that you want to work with.
+4. In the navigation menu, select **Usage + Charges**.
+5. Select **Manage API Access Keys**.
+6. Select **Regenerate** to regenerate the key.
-The preceding steps give API access key holders with access to cost and pricing information in usage reports.
+### Revoke the primary or secondary API key
-### Pass keys in the API
+1. Sign in to the Azure portal as an enterprise administrator.
+2. Search for and select **Cost Management + Billing**.
+3. Select **Billing scopes** from the navigation menu and then select the billing account that you want to work with.
+4. In the navigation menu, select **Usage + Charges**.
+5. Select **Manage API Access Keys**.
+6. Select **Revoke** to revoke the key.
+
+### Allow API access to non-administrators
+
+If you want to give the API access keys to people who aren't enterprise administrators in your enrollment, perform the following steps.
+
+The steps give API access to key holders so they can view cost and pricing information in usage reports.
+
+1. In the left navigation window, select **Policies**.
+2. Select **On** under the DEPARTMENT ADMINS CAN VIEW CHARGES section and then select **Save**.
+3. Select **On** under the ACCOUNT OWNERS CAN VIEW CHARGES section and then select **Save**.
+ :::image type="content" source="./media/ea-portal-rest-apis/policies-view-charges.png" alt-text="Screenshot showing the Polices window where you change view charges options." lightbox="./media/ea-portal-rest-apis/policies-view-charges.png" :::
+
+## Pass keys in the API
Pass the API key for each call for authentication and authorization. Pass the following property to HTTP headers:
Pass the API key for each call for authentication and authorization. Pass the fo
| Authorization | Specify the value in this format: **bearer {API\_KEY}** Example: bearer \<APIKey\> |
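For illustration only, here's a minimal PowerShell sketch that passes the key in the Authorization header. The enrollment number is a placeholder, and the endpoint route shown is an example; use the route documented for the reporting API you're calling (for example, Balance and Summary).

```
# Minimal sketch: call an Enterprise Reporting API with the EA access key.
# The enrollment number and route are placeholders; see the linked API reference
# for the exact route of the report you want.
$apiKey     = '<API access key from Manage API Access Keys>'
$enrollment = '<enrollment number>'
$uri        = "https://consumption.azure.com/v3/enrollments/$enrollment/balancesummary"

$headers = @{ Authorization = "bearer $apiKey" }
Invoke-RestMethod -Method Get -Uri $uri -Headers $headers
```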
-### Swagger
+## Swagger
A Swagger endpoint is available at [Enterprise Reporting v3 APIs](https://consumption.azure.com/swagger/ui/index) for the following APIs. Swagger helps you inspect the API. Use Swagger to generate client SDKs by using [AutoRest](https://github.com/Azure/AutoRest) or [Swagger CodeGen](https://swagger.io/swagger-codegen/). Data from May 1, 2014 onward is available through the API.
-### API response codes
+## API response codes
When you're using an API, response status codes are shown. The following table describes them.
When you're using an API, response status codes are shown. The following table d
| 400 | Bad Request | Invalid parameters ΓÇô Date ranges, EA numbers etc. | | 500 | Server Error | Unexpected error processing request |
-### Usage and billing data update frequency
+## Usage and billing data update frequency
Usage and billing data files are updated every 24 hours for the current billing month. However, data latency can occur for up to three days. For example, if usage is incurred on Monday, data might not appear in the data file until Thursday.
-### Azure service catalog
+## Azure service catalog
You can download all Azure services in the Azure portal as part of the Price Sheet download. For more information about downloading your price sheet, see [Download pricing for an Enterprise Agreement](ea-pricing.md#download-pricing-for-an-enterprise-agreement).
-### CSV data file details
+## CSV data file details
The following information describes the properties of API reports.
-#### Usage summary
+### Usage summary
JSON format is generated from the CSV report. As a result, the format is the same as the summary CSV format. The column names are concatenated without spaces (for example, `Unit of Measure` becomes `UnitOfMeasure`), so you should deserialize the JSON summary data into a data table when you consume it.
JSON format is generated from the CSV report. As a result, the format is same as
| Unit of Measure | UnitOfMeasure | UnitOfMeasure | Example values: Hours, GB, Events, Pushes, Unit, Unit Hours, MB, Daily Units | | ResourceGroup | ResourceGroup | ResourceGroup | |
-#### Azure Marketplace report
+### Azure Marketplace report
| CSV column name | JSON column name | JSON new column | | | | |
JSON format is generated from the CSV report. As a result, the format is same as
| Cost Center | CostCenters | CostCenter | | Resource Group | ResourceGroup | ResourceGroup |
-#### Price sheet
+### Price sheet
| CSV column name | JSON column name | Comment | | | | |
JSON format is generated from the CSV report. As a result, the format is same as
| Overage Unit Price | ConsumptionPrice | | | Currency Code | CurrencyCode | |
-### Common API issues
+## Common API issues
As you use Azure Enterprise REST APIs, you might encounter any of the following common issues.
data-factory Connect Data Factory To Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connect-data-factory-to-azure-purview.md
Last updated 10/25/2021
-# Connect Data Factory to Microsoft Purview (Preview)
+# Connect Data Factory to Microsoft Purview
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
databox-online Azure Stack Edge Gpu 2202 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2202-release-notes.md
The 2202 release has the following features and enhancements:
- **Multi-Access Edge Computing (MEC) and Virtual Network Functions (VNF) improvements**: - In this release, VM create and delete for VNF create and delete were parallelized. This has significantly reduced the creation time for VNFs that contain multiple VMs. - The VHD ingestion job resource clean up was moved out of VNF create and delete. This reduced the VNF creation and deletion times.-- **Updates for Azure Arc and Edge container registry** - Azure Arc and Edge container registry versions were updated. For more information, see [About updates](azure-stack-edge-gpu-install-update.md#about-latest-update).
+- **Updates for Azure Arc and Edge container registry** - Azure Arc and Edge container registry versions were updated. For more information, see [About updates](azure-stack-edge-gpu-install-update.md#about-latest-updates).
- **Security fixes** - Starting this release, a pod security policy is set up on the Kubernetes cluster on your Azure Stack Edge device. If you are using root privileges in your containerized solution, you may experience some change in the behavior. No action is required on your part.
databox-online Azure Stack Edge Gpu 2207 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2207-release-notes.md
Previously updated : 11/09/2022 Last updated : 11/21/2022
The following release notes identify the critical open issues and the resolved i
The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
-This article applies to the **Azure Stack Edge 2207** release, which maps to software version number **2.2.2038.5916**. This software can be applied to your device if you're running at least Azure Stack Edge 2106 (2.2.1636.3457) software.
+This article applies to the **Azure Stack Edge 2207** release, which maps to software version number **2.2.2039.84**. This software can be applied to your device if you're running at least Azure Stack Edge 2106 (2.2.1636.3457) software.
## What's new
The 2207 release has the following features and enhancements:
- **Kubernetes version update** - This release contains a Kubernetes version update from 1.20.9 to v1.22.6.
-## Known issues in 2207 release
+## Known issues in this release
The following table provides a summary of known issues in this release.
databox-online Azure Stack Edge Gpu 2210 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2210-release-notes.md
The following release notes identify the critical open issues and the resolved i
The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
-This article applies to the **Azure Stack Edge 2210** release, which maps to software version **2.2.2111.1002**. This software can be applied to your device if you're running **Azure Stack Edge 2207 or later** (2.2.2038.5916).
+This article applies to the **Azure Stack Edge 2210** release, which maps to software version **2.2.2111.1002**. This software can be applied to your device if you're running **Azure Stack Edge 2207 or later** (2.2.2026.5318).
## What's new
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md
This article describes the steps required to install update on your Azure Stack
The procedure described in this article was performed using a different version of software, but the process remains the same for the current software version.
-## About latest update
+## About latest updates
The current update is Update 2210. This update installs two updates, the device update followed by Kubernetes updates. The associated versions for this update are:
databox-online Azure Stack Edge Move To Self Service Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-move-to-self-service-iot-edge.md
+
+ Title: Move workloads from Azure Stack Edge's managed IoT Edge to an IoT Edge solution on a Linux VM
+description: Describes steps to move workloads from Azure Stack Edge to a self-service IoT Edge solution on a Linux VM.
++++++ Last updated : 11/21/2022+
+#Customer intent: As an IT admin, I need to understand how to move an IoT Edge workload from native/managed Azure Stack Edge to a self-service IoT Edge solution on a Linux VM, so that I can efficiently manage my VMs.
++
+# Move workloads from Azure Stack Edge's managed IoT Edge to an IoT Edge solution on a Linux VM
++
+This article provides steps to move your managed IoT Edge workloads to IoT Edge running on a Linux VM on Azure Stack Edge. This article will use IoT Edge on an Ubuntu VM as an example. You can use other [supported Linux distributions](../iot-edge/support.md#linux-containers).
+
+> [!NOTE]
+> We recommend that you deploy the latest IoT Edge version in a Linux VM to run IoT Edge workloads on Azure Stack Edge. For more information about earlier versions of IoT Edge, see [IoT Edge v1.1 EoL: What does that mean for me?](https://techcommunity.microsoft.com/t5/internet-of-things-blog/iot-edge-v1-1-eol-what-does-that-mean-for-me/ba-p/3662137).
+
+## Workflow to deploy onto an IoT Edge VM
+
+The high-level workflow is as follows:
+
+1. Deploy a Linux VM and install IoT Edge runtime on it using symmetric keys.
+
+1. Connect the newly deployed IoT Edge runtime to the newly created IoT Edge device from the previous step.
+
+1. From IoT Hub, redeploy IoT Edge modules onto the new IoT Edge device.
+
+1. Once your solution is running on IoT Edge on a Linux VM, you can remove the modules running on the native or managed IoT Edge on Azure Stack Edge. From IoT Hub, delete the IoT Edge device to remove the modules running on Azure Stack Edge.
+
+1. Optional: If you aren't using the Kubernetes cluster on Azure Stack Edge, you can delete the whole Kubernetes cluster.
+
+1. Optional: If you have leaf IoT devices communicating with IoT Edge on Kubernetes, reconfigure them to communicate with the IoT Edge instance running on the VM. Step 6 describes the required changes.
+
+## Step 1. Create an IoT Edge device on Linux using symmetric keys
+
+Create and provision an IoT Edge device on Linux using symmetric keys. For detailed steps, see [Create and provision an IoT Edge device on Linux using symmetric keys](../iot-edge/how-to-provision-single-device-linux-symmetric.md).
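+
+If you prefer to script this step, a minimal Azure CLI sketch (requires the `azure-iot` CLI extension; the hub and device names are placeholders) looks like the following. The linked article remains the authoritative procedure.
+
+```
+# Sketch: register an IoT Edge device identity that uses symmetric key authentication.
+az iot hub device-identity create --hub-name MyIoTHub --device-id myAseEdgeVm --edge-enabled
+
+# Retrieve the connection string used to provision the runtime on the VM in the next step.
+az iot hub device-identity connection-string show --hub-name MyIoTHub --device-id myAseEdgeVm
+```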
+
+## Step 2. Install and provision an IoT Edge on a Linux VM
+
+Follow the steps at [Deploy IoT Edge on an Ubuntu VM on Azure Stack Edge](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md). For other supported Linux distributions, see [Linux containers](../iot-edge/support.md).
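+
+After the runtime is installed on the VM, provisioning it with the connection string from the previous step typically looks like the following sketch, shown for illustration only with placeholder values; run the commands on the VM.
+
+```
+# Sketch: provision the IoT Edge runtime with manual (connection string) provisioning.
+sudo iotedge config mp --connection-string 'HostName=MyIoTHub.azure-devices.net;DeviceId=myAseEdgeVm;SharedAccessKey=<key>'
+sudo iotedge config apply
+
+# Verify that the runtime and deployed modules are healthy.
+sudo iotedge system status
+sudo iotedge list
+```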
+
+## Step 3. Deploy Azure IoT Edge modules from the Azure portal
+
+Deploy Azure IoT modules to the new IoT Edge. For detailed steps, see [Deploy Azure IoT Edge modules from the Azure portal](../iot-edge/how-to-deploy-modules-portal.md).
+
+ With the latest IoT Edge version, you can deploy your IoT Edge modules at scale. For more information, see [Deploy IoT Edge modules at scale using the Azure portal](../iot-edge/how-to-deploy-at-scale.md).
+
+## Step 4. Remove Azure IoT Edge modules
+
+Once your modules are successfully running on the new IoT Edge instance on the Linux VM, you can delete the IoT Edge device that's associated with the managed IoT Edge on Azure Stack Edge. From IoT Hub in the Azure portal, delete that IoT Edge device, as shown below.
+
+![Screenshot showing delete IoT Edge device from IoT Edge instance in Azure portal UI.](media/azure-stack-edge-move-to-self-service-iot-edge/azure-stack-edge-delete-iot-edge-device.png)
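+
+If you prefer to script this cleanup, a minimal Azure CLI sketch (placeholder names) is:
+
+```
+# Sketch: delete the IoT Edge device identity that represented the managed IoT Edge on Azure Stack Edge.
+az iot hub device-identity delete --hub-name MyIoTHub --device-id myAseManagedEdge
+```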
+
+## Step 5. Optional: Remove the IoT Edge service
+
+If you aren't using the Kubernetes cluster on Azure Stack Edge, use the following steps to [remove the IoT Edge service](azure-stack-edge-gpu-manage-compute.md#remove-iot-edge-service). This action will remove modules running on the IoT Edge device, the IoT Edge runtime, and the Kubernetes cluster that hosts the IoT Edge runtime.
+
+From the Azure Stack Edge resource on Azure portal, under the Azure IoT Edge service, there's a **Remove** button to remove the Kubernetes cluster.
+
+> [!IMPORTANT]
+> Once the Kubernetes cluster is removed, there is no way to recover information from the Kubernetes cluster, whether it's IoT Edge-related or not.
+
+## Step 6. Optional: Configure an IoT Edge device as a transparent gateway
+
+If your IoT Edge device on Azure Stack Edge was configured as a gateway for downstream IoT devices, you must configure the IoT Edge running on the Linux VM as a transparent gateway. For more information, see [Configure an IoT Edge device as a transparent gateway](../iot-edge/how-to-create-transparent-gateway.md).
+
+For more information about configuring downstream IoT devices to connect to a newly deployed IoT Edge running on a Linux VM, see [Connect a downstream device to an Azure IoT Edge gateway](../iot-edge/how-to-connect-downstream-device.md).
+
+## Next steps
+
+[Deploy IoT Edge on an Ubuntu VM on Azure Stack Edge](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md)
defender-for-cloud Continuous Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/continuous-export.md
To view the event schemas of the exported data types, visit the [Log Analytics t
## Export data to an Azure Event hub or Log Analytics workspace in another tenant
-You can export data to an Azure Event hub or Log Analytics workspace in a different tenant, without using [Azure Lighthouse](/azure/lighthouse/overview.md). When collecting data into a tenant, you can analyze the data from one central location.
+You can export data to an Azure Event hub or Log Analytics workspace in a different tenant, without using [Azure Lighthouse](../lighthouse/overview.md). When collecting data into a tenant, you can analyze the data from one central location.
To export data to an Azure Event hub or Log Analytics workspace in a different tenant:
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Learn more about:
- [Vulnerability assessment for Azure Container Registry (ACR)](defender-for-containers-vulnerability-assessment-azure.md) - [Vulnerability assessment for Amazon AWS Elastic Container Registry (ECR)](defender-for-containers-vulnerability-assessment-elastic.md)
-### View vulnerabilities for running images in Azure Container Registry (ACR)
-
-Defender for Cloud gives its customers the ability to prioritize the remediation of vulnerabilities in images that are currently being used within their environment using the [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false) recommendation.
-
-To provide findings for the recommendation, Defender for Cloud collects the inventory of your running containers that are collected by the Defender agent installed on your AKS clusters. Defender for Cloud correlates that inventory with the vulnerability assessment scan of images that are stored in ACR. The recommendation shows your running containers with the vulnerabilities associated with the images that are used by each container and provides vulnerability reports and remediation steps.
--
-Learn more about [viewing vulnerabilities for running images in (ACR)](defender-for-containers-vulnerability-assessment-azure.md).
- ## Run-time protection for Kubernetes nodes and clusters Defender for Containers provides real-time threat protection for your containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers. Threat protection at the cluster level is provided by the Defender agent and analysis of the Kubernetes audit logs. Examples of events at this level include exposed Kubernetes dashboards, creation of high-privileged roles, and the creation of sensitive mounts.
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
To create a rule:
:::image type="content" source="./media/remediate-vulnerability-findings-vm/modify-rule.png" alt-text="Modify or delete an existing rule."::: 1. To view or delete the rule, select the ellipsis menu ("...").
+## View vulnerabilities for images running on your AKS clusters
+
+Defender for Cloud gives its customers the ability to prioritize the remediation of vulnerabilities in images that are currently being used within their environment using the [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false) recommendation.
+
+To provide findings for the recommendation, Defender for Cloud collects the inventory of your running containers that are collected by the Defender agent installed on your AKS clusters. Defender for Cloud correlates that inventory with the vulnerability assessment scan of images that are stored in ACR. The recommendation shows your running containers with the vulnerabilities associated with the images that are used by each container and provides vulnerability reports and remediation steps.
+++ ## FAQ ### How does Defender for Containers scan an image?
defender-for-cloud Defender For Databases Enable Cosmos Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-enable-cosmos-protections.md
You can enable Microsoft Defender for Cloud on a specific Azure Cosmos DB accoun
### [ARM template](#tab/arm-template)
-Use an Azure Resource Manager template to deploy an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled. For more information, see [Create an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled](https://azure.microsoft.com/resources/templates/microsoft-defender-cosmosdb-create-account/).
+Use an Azure Resource Manager template to deploy an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled. For more information, see [Create an Azure Cosmos DB account with Microsoft Defender for Azure Cosmos DB enabled](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.documentdb/microsoft-defender-cosmosdb-create-account).
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Updates in August include:
- [Vulnerabilities for running images are now visible with Defender for Containers on your Windows containers](#vulnerabilities-for-running-images-are-now-visible-with-defender-for-containers-on-your-windows-containers) - [Azure Monitor Agent integration now in preview](#azure-monitor-agent-integration-now-in-preview) - [Deprecated VM alerts regarding suspicious activity related to a Kubernetes cluster](#deprecated-vm-alerts-regarding-suspicious-activity-related-to-a-kubernetes-cluster)+ ### Vulnerabilities for running images are now visible with Defender for Containers on your Windows containers Defender for Containers now shows vulnerabilities for running Windows containers. When vulnerabilities are detected, Defender for Cloud generates the following security recommendation listing the detected issues: [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false).
-Learn more about [viewing vulnerabilities for running images](defender-for-containers-introduction.md#view-vulnerabilities-for-running-images-in-azure-container-registry-acr).
+Learn more about [viewing vulnerabilities for running images](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters).
### Azure Monitor Agent integration now in preview
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md
Several alerts are disabled by default, as indicated by asterisks (*) in the tab
If you disable alerts that are referenced in other places, such as alert forwarding rules, make sure to update those references as needed.
-See [What's new in Microsoft Defender for IoT?](release-notes.md#whats-new-in-microsoft-defender-for-iot) for detailed information about changes made to alerts.
- ## Supported alert types | Alert type | Description |
defender-for-iot Hpe Proliant Dl20 Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-legacy.md
+
+ Title: HPE ProLiant DL20 for OT monitoring in enterprise deployments - Microsoft Defender for IoT
+description: Learn about the HPE ProLiant DL20 appliance when used for OT monitoring with Microsoft Defender for IoT in enterprise deployments.
Last updated : 10/30/2022+++
+# HPE ProLiant DL20 Gen10
+
+This article describes the **HPE ProLiant DL20 Gen10** appliance for OT sensors in an enterprise deployment.
+
+Legacy appliances are certified but aren't currently offered as pre-configured appliances.
+
+| Appliance characteristic |Details |
+|||
+|**Hardware profile** | E1800 |
+|**Performance** | Max bandwidth: 1 Gbps <br>Max devices: 10,000 |
+|**Physical specifications** | Mounting: 1U <br> Ports: 8x RJ45 or 6x SFP (OPT)|
+|**Status** | Supported, not available pre-configured |
+
+The following image shows a sample of the HPE ProLiant DL20 front panel:
++
+The following image shows a sample of the HPE ProLiant DL20 back panel:
++
+## Specifications
+
+|Component |Specifications|
+|||
+|Chassis |1U rack server |
+|Dimensions |Four 3.5" chassis: 4.29 x 43.46 x 38.22 cm / 1.70 x 17.11 x 15.05 in |
+|Weight | Max 7.9 kg / 17.41 lb |
+
+## DL20 Gen10 BOM
+
+| Quantity | PN| Description: high end |
+|--|--|--|
+|1| P06963-B21 | HPE DL20 Gen10 4SFF CTO Server |
+|1| P17104-L21 | HPE DL20 Gen10 E-2234 FIO Kit |
+|2| 879507-B21 | HPE 16-GB 2Rx8 PC4-2666V-E STND Kit |
+|3| 655710-B21 | HPE 1-TB SATA 7.2 K SFF SC DS HDD |
+|1| P06667-B21 | HPE DL20 Gen10 x8x16 FLOM Riser Kit |
+|1| 665240-B21 | HPE Ethernet 1-Gb 4-port 366FLR Adapter |
+|1| 782961-B21 | HPE 12-W Smart Storage Battery |
+|1| 869081-B21 | HPE Smart Array P408i-a SR G10 LH Controller |
+|2| 865408-B21 | HPE 500-W FS Plat Hot Plug LH Power Supply Kit |
+|1| 512485-B21 | HPE iLO Adv 1-Server License 1 Year Support |
+|1| P06722-B21 | HPE DL20 Gen10 RPS Enablement FIO Kit |
+|1| 775612-B21 | HPE 1U Short Friction Rail Kit |
+
+## Port expansion
+
+Optional modules for port expansion include:
+
+|Location |Type|Specifications|
+|-- | --| |
+| PCI Slot 1 (Low profile)| Quad Port Ethernet NIC| 811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI |
+| PCI Slot 1 (Low profile) | DP F/O NIC|727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter|
+| PCI Slot 2 (High profile)| Quad Port Ethernet NIC|811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI|
+| PCI Slot 2 (High profile)|DP F/O NIC| 727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter|
+| PCI Slot 2 (High profile)|Quad Port F/O NIC| 869585-B21 - HPE 10 GbE 4p SFP+ X710 Adapter SI|
+| SFPs for Fiber Optic NICs|MultiMode, Short Range|455883-B21 - HPE BLc 10G SFP+ SR Transceiver|
+| SFPs for Fiber Optic NICs|SingleMode, Long Range | 455886-B21 - HPE BLc 10G SFP+ LR Transceiver|
+
+## HPE ProLiant DL20 Gen10 installation
+
+This section describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 appliance.
+
+Installation includes:
+
+- Enabling remote access and updating the default administrator password
+- Configuring iLO port on network port 1
+- Configuring BIOS and RAID settings
+- Installing Defender for IoT software
+
+> [!NOTE]
+> Installation procedures are only relevant if you need to re-install software on a pre-configured device, or if you buy your own hardware and configure the appliance yourself.
+>
+
+### Enable remote access and update the password
+
+Use the following procedure to set up network options and update the default password.
+
+**To enable, and update the password**:
+
+1. Connect a screen and a keyboard to the HPE appliance, turn on the appliance, and press **F9**.
+
+ :::image type="content" source="../media/tutorial-install-components/hpe-proliant-screen-v2.png" alt-text="Screenshot that shows the HPE ProLiant window.":::
+
+1. Go to **System Utilities** > **System Configuration** > **iLO 5 Configuration Utility** > **Network Options**.
+
+ :::image type="content" source="../media/tutorial-install-components/system-configuration-window-v2.png" alt-text="Screenshot that shows the System Configuration window.":::
+
+ 1. Select **Shared Network Port-LOM** from the **Network Interface Adapter** field.
+
+ 1. Set **Enable DHCP** to **Off**.
+
+ 1. Enter the IP address, subnet mask, and gateway IP address.
+
+1. Select **F10: Save**.
+
+1. Select **Esc** to get back to the **iLO 5 Configuration Utility**, and then select **User Management**.
+
+1. Select **Edit/Remove User**. The administrator is the only default user defined.
+
+1. Change the default password and select **F10: Save**.
+
+### Configure the HPE BIOS
+
+This procedure describes how to update the HPE BIOS configuration for your OT deployment.
+
+**To configure the HPE BIOS**:
+
+1. Select **System Utilities** > **System Configuration** > **BIOS/Platform Configuration (RBSU)**.
+
+1. In the **BIOS/Platform Configuration (RBSU)** form, select **Boot Options**.
+
+1. Change **Boot Mode** to **Legacy BIOS Mode**, and then select **F10: Save**.
+
+1. Select **Esc** twice to close the **System Configuration** form.
+
+1. Select **Embedded RAID 1: HPE Smart Array P408i-a SR Gen 10** > **Array Configuration** > **Create Array**.
+
+1. In the **Create Array** form, select all four disk options, and on the next page select **RAID10**.
+
+> [!NOTE]
+> For **Data-at-Rest** encryption, see the HPE guidance for activating RAID Secure Encryption or using Self-Encrypting-Drives (SED).
+>
+
+### Install Defender for IoT software on the HPE ProLiant DL20 Gen10
+
+This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10
+
+The installation process takes about 20 minutes. After the installation, the system is restarted several times.
+
+**To install Defender for IoT software**:
+
+1. Connect the screen and keyboard to the appliance, and then connect to the CLI.
+
+1. Connect an external CD or disk-on-key that contains the software you downloaded from the Azure portal.
+
+1. Start the appliance.
+
+1. Continue with the generic procedure for installing Defender for IoT software. For more information, see [Defender for IoT software installation](../how-to-install-software.md).
+
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md).
+
+Then, use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](../how-to-install-software.md)
defender-for-iot Hpe Proliant Dl20 Plus Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-enterprise.md
Title: HPE ProLiant DL20/DL20 Plus for OT monitoring in enterprise deployments- Microsoft Defender for IoT
-description: Learn about the HPE ProLiant DL20/DL20 Plus appliance when used for OT monitoring with Microsoft Defender for IoT in enterprise deployments.
+ Title: HPE ProLiant DL20 Gen10 Plus for OT monitoring in enterprise deployments - Microsoft Defender for IoT
+description: Learn about the HPE ProLiant DL20 Gen10 Plus appliance when used for OT monitoring with Microsoft Defender for IoT in enterprise deployments.
Last updated 04/24/2022
+# HPE ProLiant DL20 Gen10 Plus (4SFF)
-# HPE ProLiant DL20 Gen10/DL20 Gen10 Plus
-
-This article describes the **HPE ProLiant DL20 Gen10** or **HPE ProLiant DL20 Gen10 Plus** appliance for OT sensors in an enterprise deployment.
+This article describes the **HPE ProLiant DL20 Gen10 Plus** appliance for OT sensors in an enterprise deployment.
The HPE ProLiant DL20 Gen10 Plus is also available for the on-premises management console.
The HPE ProLiant DL20 Gen10 Plus is also available for the on-premises managemen
|**Hardware profile** | E1800 | |**Performance** | Max bandwidth: 1 Gbps <br>Max devices: 10,000 | |**Physical specifications** | Mounting: 1U <br> Ports: 8x RJ45 or 6x SFP (OPT)|
-|**Status** | Supported, Available preconfigured |
+|**Status** | Supported, available pre-configured |
The following image shows a sample of the HPE ProLiant DL20 front panel:
The following image shows a sample of the HPE ProLiant DL20 back panel:
:::image type="content" source="../media/tutorial-install-components/hpe-proliant-dl20-back-panel-v2.png" alt-text="Photo of the back panel of the HPE ProLiant DL20." border="false":::
-### Specifications
+## Specifications
+
+|Component|Technical specifications|
+|-|-|
+|Chassis|1U rack server|
+|Physical Characteristics | HPE DL20 Gen10+ NHP 4SFF CTO Server |
+|Processor| Intel Xeon E-2334 <br> 3.4 GHz 4C 65 W|
+|Chipset|Intel C256 |
+|Memory|2x 16-GB Dual Rank x8 DDR4-3200|
+|Storage|4x 1-TB SATA 6G Midline 7.2 K SFF (2.5 in) – RAID 10 |
+|Network controller|On-board: 2x 1 Gb|
+|External| 1 x HPE Ethernet 1-Gb 4-port 366FLR Adapter|
+|On-board| iLO Port Card 1 Gb|
+|Management|HPE iLO Advanced|
+|Device access| Front: One USB 3.0 1 x USB iLO Service Port<br> Rear: Two USBs 3.0|
+|Internal| One USB 3.0|
+|Power|2x Hot Plug Power Supply 290 W|
+|Rack support|HPE 1U Short Friction Rail Kit|
+
+## DL20 Gen10 Plus (4SFF) - Bill of Materials
-|Component |Specifications|
-|||
-|Chassis |1U rack server |
-|Dimensions |Four 3.5" chassis: 4.29 x 43.46 x 38.22 cm / 1.70 x 17.11 x 15.05 in |
-|Weight | Max 7.9 kg / 17.41 lb |
-
-**DL20 Gen10 BOM**
-
-| Quantity | PN| Description: high end |
-|--|--|--|
-|1| P06963-B21 | HPE DL20 Gen10 4SFF CTO Server |
-|1| P17104-L21 | HPE DL20 Gen10 E-2234 FIO Kit |
-|2| 879507-B21 | HPE 16-GB 2Rx8 PC4-2666V-E STND Kit |
-|3| 655710-B21 | HPE 1-TB SATA 7.2 K SFF SC DS HDD |
-|1| P06667-B21 | HPE DL20 Gen10 x8x16 FLOM Riser Kit |
-|1| 665240-B21 | HPE Ethernet 1-Gb 4-port 366FLR Adapter |
-|1| 782961-B21 | HPE 12-W Smart Storage Battery |
-|1| 869081-B21 | HPE Smart Array P408i-a SR G10 LH Controller |
-|2| 865408-B21 | HPE 500-W FS Plat Hot Plug LH Power Supply Kit |
-|1| 512485-B21 | HPE iLO Adv 1-Server License 1 Year Support |
-|1| P06722-B21 | HPE DL20 Gen10 RPS Enablement FIO Kit |
-|1| 775612-B21 | HPE 1U Short Friction Rail Kit |
-
-**DL20 Gen10 Plus BOM**:
+|Quantity|PN|Description|
+|-||-|
+|1| P44111-B21 | HPE DL20 Gen10+ 4SFF CTO Server|
+|1| P45252-B21 | Intel Xeon E-2334 FIO CPU for HPE|
+|4| P28610-B21 | HPE 1TB SATA 7.2K SFF BC HDD|
+|2| P43019-B21 | HPE 16GB 1Rx8 PC4-3200AA-E Standard Kit|
+|1| 869079-B21 | HPE Smart Array E208i-a SR G10 LH Ctrlr (RAID10)|
+|1| P21106-B21 | INT I350 1GbE 4p BASE-T Adapter|
+|1| P45948-B21 | HPE DL20 Gen10+ RPS FIO Enable Kit|
+|2| 865408-B21 | HPE 500W FS Plat Hot Plug LH Power Supply Kit|
+|1| 775612-B21 | HPE 1U Short Friction Rail Kit|
+|1| 512485-B21 | HPE iLO Adv 1 Server License 1 year support|
+|1| P46114-B21 | HPE DL20 Gen10+ 2x8 LP FIO Riser Kit|
+
+## Optional Storage Arrays
|Quantity|PN|Description| |-||-|
-|1| P44111-B21| HPE DL20 Gen10+ 4SFF CTO Server|
-|1| P45252-B21| Intel Xeon E-2334 FIO CPU for HPE|
-|1| 869081-B21| HPE Smart Array P408i-a SR G10 LH Controller|
-|1| 782961-B21| HPE 12W Smart Storage Battery|
-|1| P45948-B21| HPE DL20 Gen10+ RPS FIO Enable Kit|
-|2| 865408-B21| HPE 500W FS Plat Hot Plug LH Power Supply Kit|
-|1| 775612-B21| HPE 1U Short Friction Rail Kit|
-|1| 512485-B21| HPE iLO Adv 1 Server License 1 year support|
-|1| P46114-B21| HPE DL20 Gen10+ 2x8 LP FIO Riser Kit|
-|1| P21106-B21| INT I350 1GbE 4p BASE-T Adapter|
-|3| P28610-B21| HPE 1TB SATA 7.2K SFF BC HDD|
-|2| P43019-B21| HPE 16GB 1Rx8 PC4-3200AA-E Standard Kit|
+|1| P26325-B21 | Broadcom MegaRAID MR216i-a x16 Lanes without Cache NVMe/SAS 12G Controller (RAID5)<br><br>**Note**: This RAID controller occupies the PCIe expansion slot and doesn't allow networking port expansion |
## Port expansion Optional modules for port expansion include: |Location |Type|Specifications|
-|-- | --| |
-| PCI Slot 1 (Low profile)| Quad Port Ethernet NIC| 811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI |
-| PCI Slot 1 (Low profile) | DP F/O NIC|727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter|
-| PCI Slot 2 (High profile)| Quad Port Ethernet NIC|811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI|
-| PCI Slot 2 (High profile)|DP F/O NIC| 727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter|
-| PCI Slot 2 (High profile)|Quad Port F/O NIC| 869585-B21 - HPE 10 GbE 4p SFP+ X710 Adapter SI|
+|--|--||
+| PCI Slot 1 (Low profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25Gb 2-port SFP28 Adapter for HPE |
+| PCI Slot 1 (Low profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10Gb 2-port SFP+ Adapter for HPE |
+| PCI Slot 2 (High profile) | Quad Port Ethernet NIC| P21106-B21 - Intel I350-T4 Ethernet 1Gb 4-port BASE-T Adapter for HPE |
+| PCI Slot 2 (High profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25Gb 2-port SFP28 Adapter for HPE |
+| PCI Slot 2 (High profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10Gb 2-port SFP+ Adapter for HPE |
| SFPs for Fiber Optic NICs|MultiMode, Short Range|455883-B21 - HPE BLc 10G SFP+ SR Transceiver| | SFPs for Fiber Optic NICs|SingleMode, Long Range | 455886-B21 - HPE BLc 10G SFP+ LR Transceiver|
-## HPE ProLiant DL20 Gen10 / HPE ProLiant DL20 Gen10 Plus installation
+## HPE ProLiant DL20 Gen10 Plus installation
-This section describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 or HPE ProLiant DL20 Gen10 Plus appliance.
+This section describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 Plus appliance.
Installation includes: - Enabling remote access and updating the default administrator password - Configuring iLO port on network port 1-- Configuring BIOS and RAID settings
+- Configuring BIOS and RAID10 settings
- Installing Defender for IoT software > [!NOTE]
This procedure describes how to update the HPE BIOS configuration for your OT de
1. Select **Esc** twice to close the **System Configuration** form.
-1. Select **Embedded RAID 1: HPE Smart Array P408i-a SR Gen 10** > **Array Configuration** > **Create Array**.
+1. Select **Embedded RAID 1: HPE Smart Array E208i-a SR Gen 10** > **Array Configuration** > **Create Array**.
-1. In the **Create Array** form, select all the options. Three options are available for the **Enterprise** appliance.
+1. In the **Create Array** form, select all four disk options, and on the next page select **RAID10**.
> [!NOTE]
-> For **Data-at-Rest** encryption, see the HPE guidance for activating RAID Secure Encryption or using Self-Encrypting-Drives (SED).
+> For **Data-at-Rest** encryption, see HPE guidance for activating RAID SR Secure Encryption or using Self-Encrypting-Drives (SED).
>
-### Install Defender for IoT software on the HPE ProLiant DL20 Gen10 or HPE ProLiant DL20 Gen10 Plus
+### Install Defender for IoT software on the HPE ProLiant DL20 Gen10 Plus
-This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 or HPE ProLiant DL20 Gen10 Plus.
+This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 Plus.
The installation process takes about 20 minutes. After the installation, the system is restarted several times.
defender-for-iot Hpe Proliant Dl20 Plus Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-smb.md
Title: HPE ProLiant DL20 Gen10/DL20 Gen10 Plus (NHP 2LFF) for OT monitoring in SMB deployments- Microsoft Defender for IoT
-description: Learn about the HPE ProLiant DL20 Gen10/DL20 Gen10 Plus appliance when used for in SMB deployments for OT monitoring with Microsoft Defender for IoT.
+ Title: HPE ProLiant DL20 Gen10 Plus (NHP 2LFF) for OT monitoring in SMB deployments - Microsoft Defender for IoT
+description: Learn about the HPE ProLiant DL20 Gen10 Plus appliance when used for in SMB deployments for OT monitoring with Microsoft Defender for IoT.
Last updated 04/24/2022
-# HPE ProLiant DL20 Gen10/DL20 Gen10 Plus (NHP 2LFF) for SMB deployments
+# HPE ProLiant DL20 Gen10 Plus (NHP 2LFF)
-This article describes the **HPE ProLiant DL20 Gen10** or **HPE ProLiant DL20 Gen10 Plus** appliance for OT sensors in an SBM deployment.
+This article describes the **HPE ProLiant DL20 Gen10 Plus** appliance for OT sensors monitoring production lines.
The HPE ProLiant DL20 Gen10 Plus is also available for the on-premises management console.
The HPE ProLiant DL20 Gen10 Plus is also available for the on-premises managemen
|**Hardware profile** | L500| |**Performance** | Max bandwidth: 200 Mbps <br>Max devices: 1,000 | |**Physical specifications** | Mounting: 1U<br>Ports: 4x RJ45|
-|**Status** | Supported; Available as pre-configured |
+|**Status** | Supported; available pre-configured |
The following image shows a sample of the HPE ProLiant DL20 Gen10 front panel:
The following image shows a sample of the HPE ProLiant DL20 Gen10 back panel:
|Component|Technical specifications| |-|-| |Chassis|1U rack server|
-|Dimensions |4.32 x 43.46 x 38.22 cm / 1.70 x 17.11 x 15.05 in|
-|Weight|7.88 kg / 17.37 lb|
-|Processor| Intel Xeon E-2224 <br> 3.4 GHz 4C 71 W|
-|Chipset|Intel C242|
-|Memory|One 8-GB Dual Rank x8 DDR4-2666|
-|Storage|Two 1-TB SATA 6G Midline 7.2 K SFF (2.5 in) – RAID 1 with Smart Array P208i-a|
-|Network controller|On-board: Two 1 Gb|
-|On-board| iLO Port Card 1 Gb|
+|Physical Characteristics | HPE DL20 Gen10+ NHP 2LFF CTO Server |
+|Processor| Intel Xeon E-2334 <br> 3.4 GHz 4C 65 W|
+|Chipset|Intel C256|
+|Memory|1x 8-GB Dual Rank x8 DDR4-3200|
+|Storage|4x 1-TB SATA 6G Midline 7.2 K SFF (2.5 in) – RAID 1 |
+|Network controller|On-board: 2x 1 Gb|
|External| 1 x HPE Ethernet 1-Gb 4-port 366FLR Adapter|
+|On-board| iLO Port Card 1 Gb|
|Management|HPE iLO Advanced| |Device access| Front: One USB 3.0 1 x USB iLO Service Port<br> Rear: Two USBs 3.0| |Internal| One USB 3.0| |Power|Hot Plug Power Supply 290 W| |Rack support|HPE 1U Short Friction Rail Kit|
-## Appliance BOM
+## DL20 Gen10 Plus (NHP 2LFF) - Bill of Materials
+
+|Quantity|PN|Description|
+|-||-|
+|1| P44111-B21 | HPE DL20 Gen10+ NHP 2LFF CTO Server|
+|1| P45252-B21 | Intel Xeon E-2334 FIO CPU for HPE|
+|2| P28610-B21 | HPE 1TB SATA 7.2K SFF BC HDD|
+|1| P43016-B21 | HPE 8GB 1Rx8 PC4-3200AA-E Standard Kit|
+|1| 869079-B21 | HPE Smart Array E208i-a SR G10 LH Ctrlr (RAID10)|
+|1| P21106-B21 | INT I350 1GbE 4p BASE-T Adapter|
+|1| P45948-B21 | HPE DL20 Gen10+ RPS FIO Enable Kit|
+|1| 865408-B21 | HPE 500W FS Plat Hot Plug LH Power Supply Kit|
+|1| 775612-B21 | HPE 1U Short Friction Rail Kit|
+|1| 512485-B21 | HPE iLO Adv 1 Server License 1 year support|
+|1| P46114-B21 | HPE DL20 Gen10+ 2x8 LP FIO Riser Kit|
+
+## Optional Storage Arrays
-|PN|Description|Quantity|
-|:-|:-|:-|
-|P06961-B21|HPE DL20 Gen10 NHP 2LFF CTO Server|1|
-|P17102-L21|HPE DL20 Gen10 E-2224 FIO Kit|1|
-|879505-B21|HPE 8-GB 1Rx8 PC4-2666V-E Standard Kit|1|
-|801882-B21|HPE 1-TB SATA 7.2 K LFF RW HDD|2|
-|P06667-B21|HPE DL20 Gen10 x8x16 FLOM Riser Kit|1|
-|665240-B21|HPE Ethernet 1-Gb 4-port 366FLR Adapter|1|
-|869079-B21|HPE Smart Array E208i-a SR G10 LH Controller|1|
-|P21649-B21|HPE DL20 Gen10 Plat 290 W FIO PSU Kit|1|
-|P06683-B21|HPE DL20 Gen10 M.2 SATA/LFF AROC Cable Kit|1|
-|512485-B21|HPE iLO Adv 1-Server License 1 Year Support|1|
-|775612-B21|HPE 1U Short Friction Rail Kit|1|
+|Quantity|PN|Description|
+|-||-|
+|1| P26325-B21 | Broadcom MegaRAID MR216i-a x16 Lanes without Cache NVMe/SAS 12G Controller (RAID5)<br><br>**Note**: This RAID controller occupies the PCIe expansion slot and doesn't allow networking port expansion |
-## HPE ProLiant DL20 Gen10/HPE ProLiant DL20 Gen10 Plus installation
+## Port expansion
-This section describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 or HPE ProLiant DL20 Gen10 Plus appliance.
+Optional modules for port expansion include:
+
+|Location |Type|Specifications|
+|--|--||
+| PCI Slot 1 (Low profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25Gb 2-port SFP28 Adapter for HPE |
+| PCI Slot 1 (Low profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10Gb 2-port SFP+ Adapter for HPE |
+| PCI Slot 2 (High profile) | Quad Port Ethernet NIC| P21106-B21 - Intel I350-T4 Ethernet 1Gb 4-port BASE-T Adapter for HPE |
+| PCI Slot 2 (High profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25Gb 2-port SFP28 Adapter for HPE |
+| PCI Slot 2 (High profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10Gb 2-port SFP+ Adapter for HPE |
+| SFPs for Fiber Optic NICs|MultiMode, Short Range|455883-B21 - HPE BLc 10G SFP+ SR Transceiver|
+| SFPs for Fiber Optic NICs|SingleMode, Long Range | 455886-B21 - HPE BLc 10G SFP+ LR Transceiver|
+
+## HPE ProLiant DL20 Gen10 Plus installation
+
+This section describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 Plus appliance.
Installation includes: - Enabling remote access and updating the default administrator password - Configuring iLO port on network port 1-- Configuring BIOS and RAID settings
+- Configuring BIOS and RAID1 settings
- Installing Defender for IoT software > [!NOTE]
-> Installation procedures are only relevant if you need to re-install software on a preconfigured device, or if you buy your own hardware and configure the appliance yourself.
+> Installation procedures are only relevant if you need to re-install software on a pre-configured device, or if you buy your own hardware and configure the appliance yourself.
> - ### Enable remote access and update the password Use the following procedure to set up network options and update the default password.
This procedure describes how to update the HPE BIOS configuration for your OT de
1. Select **Esc** twice to close the **System Configuration** form.
-1. Select **Embedded RAID 1: HPE Smart Array P208i-a SR Gen 10** > **Array Configuration** > **Create Array**.
+1. Select **Embedded RAID 1: HPE Smart Array E208i-a SR Gen 10** > **Array Configuration** > **Create Array**.
1. Select **Proceed to Next Form**.
-1. In the **Set RAID Level** form, set the level to **RAID 5** for enterprise deployments and **RAID 1** for SMB deployments.
+1. In the **Set RAID Level** form, set the level to **RAID 1**.
1. Select **Proceed to Next Form**.
This procedure describes how to update the HPE BIOS configuration for your OT de
:::image type="content" source="../media/tutorial-install-components/boot-override-window-two-v2.png" alt-text="Screenshot that shows the second Boot Override window.":::
-### Install Defender for IoT software on the HPE ProLiant DL20 Gen10 or HPE ProLiant DL20 Gen10 Plus
+### Install Defender for IoT software on the HPE ProLiant DL20 Gen10 Plus
-This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 or HPE ProLiant DL20 Gen10 Plus.
+This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 Plus.
The installation process takes about 20 minutes. After the installation, the system is restarted several times.
defender-for-iot Hpe Proliant Dl20 Smb Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-smb-legacy.md
+
+ Title: HPE ProLiant DL20 Gen10 (NHP 2LFF) for OT monitoring in SMB deployments- Microsoft Defender for IoT
+description: Learn about the HPE ProLiant DL20 Gen10 appliance when used for in SMB deployments for OT monitoring with Microsoft Defender for IoT.
Last updated : 10/30/2022+++
+# HPE ProLiant DL20 Gen10 (NHP 2LFF)
+
+This article describes the **HPE ProLiant DL20 Gen10** appliance for OT sensors monitoring production lines.
+
+Legacy appliances are certified but are not currently offered as pre-configured appliances.
+
+| Appliance characteristic |Details |
+|||
+|**Hardware profile** | L500|
+|**Performance** | Max bandwidth: 200 Mbps <br>Max devices: 1,000 |
+|**Physical specifications** | Mounting: 1U<br>Ports: 4x RJ45|
+|**Status** | Supported, not available pre-configured |
+
+The following image shows a sample of the HPE ProLiant DL20 Gen10 front panel:
++
+The following image shows a sample of the HPE ProLiant DL20 Gen10 back panel:
++
+## Specifications
+
+|Component|Technical specifications|
+|-|-|
+|Chassis|1U rack server|
+|Dimensions |4.32 x 43.46 x 38.22 cm / 1.70 x 17.11 x 15.05 in|
+|Weight|7.88 kg / 17.37 lb|
+|Processor| Intel Xeon E-2224 <br> 3.4 GHz 4C 71 W|
+|Chipset|Intel C242|
+|Memory|One 8-GB Dual Rank x8 DDR4-2666|
+|Storage|Two 1-TB SATA 6G Midline 7.2 K SFF (2.5 in) – RAID 1 with Smart Array P208i-a|
+|Network controller|On-board: Two 1 Gb|
+|On-board| iLO Port Card 1 Gb|
+|External| 1 x HPE Ethernet 1-Gb 4-port 366FLR Adapter|
+|Management|HPE iLO Advanced|
+|Device access| Front: One USB 3.0 1 x USB iLO Service Port<br> Rear: Two USBs 3.0|
+|Internal| One USB 3.0|
+|Power|Hot Plug Power Supply 290 W|
+|Rack support|HPE 1U Short Friction Rail Kit|
+
+## Appliance BOM
+
+|PN|Description|Quantity|
+|:-|:-|:-|
+|P06961-B21|HPE DL20 Gen10 NHP 2LFF CTO Server|1|
+|P17102-L21|HPE DL20 Gen10 E-2224 FIO Kit|1|
+|879505-B21|HPE 8-GB 1Rx8 PC4-2666V-E Standard Kit|1|
+|801882-B21|HPE 1-TB SATA 7.2 K LFF RW HDD|2|
+|P06667-B21|HPE DL20 Gen10 x8x16 FLOM Riser Kit|1|
+|665240-B21|HPE Ethernet 1-Gb 4-port 366FLR Adapter|1|
+|869079-B21|HPE Smart Array E208i-a SR G10 LH Controller|1|
+|P21649-B21|HPE DL20 Gen10 Plat 290 W FIO PSU Kit|1|
+|P06683-B21|HPE DL20 Gen10 M.2 SATA/LFF AROC Cable Kit|1|
+|512485-B21|HPE iLO Adv 1-Server License 1 Year Support|1|
+|775612-B21|HPE 1U Short Friction Rail Kit|1|
+
+## HPE ProLiant DL20 Gen10 installation
+
+This section describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 appliance.
+
+Installation includes:
+
+- Enabling remote access and updating the default administrator password
+- Configuring iLO port on network port 1
+- Configuring BIOS and RAID settings
+- Installing Defender for IoT software
+
+> [!NOTE]
+> Installation procedures are only relevant if you need to re-install software on a pre-configured device, or if you buy your own hardware and configure the appliance yourself.
+>
+
+### Enable remote access and update the password
+
+Use the following procedure to set up network options and update the default password.
+
+**To enable and update the password**:
+
+1. Connect a screen and a keyboard to the HPE appliance, turn on the appliance, and press **F9**.
+
+ :::image type="content" source="../media/tutorial-install-components/hpe-proliant-screen-v2.png" alt-text="Screenshot that shows the HPE ProLiant window.":::
+
+1. Go to **System Utilities** > **System Configuration** > **iLO 5 Configuration Utility** > **Network Options**.
+
+ :::image type="content" source="../media/tutorial-install-components/system-configuration-window-v2.png" alt-text="Screenshot that shows the System Configuration window.":::
+
+ 1. Select **Shared Network Port-LOM** from the **Network Interface Adapter** field.
+
+ 1. Set **Enable DHCP** to **Off**.
+
+ 1. Enter the IP address, subnet mask, and gateway IP address.
+
+1. Select **F10: Save**.
+
+1. Select **Esc** to get back to the **iLO 5 Configuration Utility**, and then select **User Management**.
+
+1. Select **Edit/Remove User**. The administrator is the only default user defined.
+
+1. Change the default password and select **F10: Save**.
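+
+After you save the network settings and password, you can optionally confirm that the iLO interface responds at the static address you assigned. The following is a minimal sketch run from a Windows workstation on the same management network; the IP address shown is a placeholder for the value you entered above.
+
+```powershell
+# Placeholder - replace with the static iLO IP address you configured.
+$iloAddress = '10.0.0.10'
+
+# Confirm that the iLO web interface (HTTPS, port 443) is reachable.
+Test-NetConnection -ComputerName $iloAddress -Port 443
+```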
+
+### Configure the HPE BIOS
+
+This procedure describes how to update the HPE BIOS configuration for your OT deployment.
+
+**To configure the HPE BIOS**:
+
+1. Select **System Utilities** > **System Configuration** > **BIOS/Platform Configuration (RBSU)**.
+
+1. In the **BIOS/Platform Configuration (RBSU)** form, select **Boot Options**.
+
+1. Change **Boot Mode** to **Legacy BIOS Mode**, and then select **F10: Save**.
+
+1. Select **Esc** twice to close the **System Configuration** form.
+
+1. Select **Embedded RAID 1: HPE Smart Array P208i-a SR Gen 10** > **Array Configuration** > **Create Array**.
+
+1. Select **Proceed to Next Form**.
+
+1. In the **Set RAID Level** form, set the level to **RAID 5** for enterprise deployments and **RAID 1** for SMB deployments.
+
+1. Select **Proceed to Next Form**.
+
+1. In the **Logical Drive Label** form, enter **Logical Drive 1**.
+
+1. Select **Submit Changes**.
+
+1. In the **Submit** form, select **Back to Main Menu**.
+
+1. Select **F10: Save** and then press **Esc** twice.
+
+1. In the **System Utilities** window, select **One-Time Boot Menu**.
+
+1. In the **One-Time Boot Menu** form, select **Legacy BIOS One-Time Boot Menu**.
+
+1. The **Booting in Legacy** and **Boot Override** windows appear. Choose a boot override option; for example, to a CD-ROM, USB, HDD, or UEFI shell.
+
+ :::image type="content" source="../media/tutorial-install-components/boot-override-window-one-v2.png" alt-text="Screenshot that shows the first Boot Override window.":::
+
+ :::image type="content" source="../media/tutorial-install-components/boot-override-window-two-v2.png" alt-text="Screenshot that shows the second Boot Override window.":::
+
+### Install Defender for IoT software on the HPE ProLiant DL20 Gen10
+
+This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10.
+
+The installation process takes about 20 minutes. After the installation, the system is restarted several times.
+
+**To install Defender for IoT software**:
+
+1. Connect the screen and keyboard to the appliance, and then connect to the CLI.
+
+1. Connect an external CD or disk-on-key that contains the software you downloaded from the Azure portal. If you want to verify the integrity of the downloaded file before creating the installation media, see the optional check after these steps.
+
+1. Start the appliance.
+
+1. Continue by installing your Defender for IoT software. For more information, see [Defender for IoT software installation](../how-to-install-software.md).
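+
+Optionally, before you write the downloaded installation file to a CD or disk-on-key, you can confirm that it wasn't corrupted in transit by comparing its hash against the value published with the download, if one is provided. This is a sketch only; the file path and expected hash are placeholders.
+
+```powershell
+# Placeholders - replace with your downloaded file path and the published SHA-256 value.
+$isoPath      = 'C:\Downloads\defender-iot-sensor.iso'
+$expectedHash = '<SHA-256 value from the download page>'
+
+$actualHash = (Get-FileHash -Path $isoPath -Algorithm SHA256).Hash
+if ($actualHash -eq $expectedHash) { 'Hash matches - the file is intact.' } else { 'Hash mismatch - download the file again.' }
+```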
+
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md).
+
+Then, use any of the following procedures to continue:
+
+- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)
+- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)
+- [Install software](../how-to-install-software.md)
defender-for-iot Custom Columns Sample Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/custom-columns-sample-script.md
Title: Sample automation script for custom columns on on-premises management consoles - Microsoft Defender for IoT
-description: Learn how to view and manage OT devices (assets) from the Device inventory page on an on-premises management console.
+description: Use a sample script when adding custom columns to your on-premises management console Device inventory page.
Last updated 07/12/2022
defender-for-iot How To Accelerate Alert Incident Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-accelerate-alert-incident-response.md
The following alert groups are automatically defined:
- Bandwidth anomalies - Internet access - Suspicion of malware-- Buffer overflow
+- Buffer overflow
- Operation failures - Suspicion of malicious activity - Command failures
Alert groups are predefined. For details about alerts associated with alert grou
## Customize alert rules
-Add custom alert rule to pinpoint specific activity needed for your organization such as for particular protocols, source or destination addresses, or a combination of parameters.
+Add custom alert rules to pinpoint specific activity that your organization needs to track. Rules can be based on parameters such as particular protocols, source or destination addresses, or a combination of parameters.
+For example, in an environment running MODBUS, you can define a rule to detect any commands written to a memory register on a specific IP address and Ethernet destination. Another example is an alert for any access to a specific IP address.
-For example, you might want to define an alert for an environment running MODBUS to detect any written commands to a memory register on a specific IP address and ethernet destination. Another example would be an alert for any access to a particular IP address.
-
-Use custom alert rule actions to instruct Defender for IT to take specific action when the alert is triggered, such as allowing users to access PCAP files from the alert, assigning alert severity, or generating an event that shows in the event timeline. Alert messages indicate that the alert was generated from a custom alert rule.
+Specify in the custom alert rule what action Defender for IoT should take when the alert is triggered. For example, the action can allow users to access PCAP files from the alert, assign an alert severity, or generate an event that shows in the event timeline. Alert messages indicate that the alert was generated from a custom alert rule.
**To create a custom alert rule**:
Use custom alert rule actions to instruct Defender for IT to take specific actio
1. In the **Create custom alert rule** pane that shows on the right, define the following fields:
- - **Alert name**. Enter a meaningful name for the alert.
-
- - **Alert protocol**. Select the protocol you want to detect. In specific cases, select one of the following protocols:
-
- - For a database data or structure manipulation event, select **TNS** or **TDS**
- - For a file event, select **HTTP**, **DELTAV**, **SMB**, or **FTP**, depending on the file type
- - For a package download event, select **HTTP**
- - For an open ports (dropped) event, select **TCP** or **UDP**, depending on the port type.
-
- To create rules that monitor for specific changes in one of your OT protocols, such as S7 or CIP, use any parameters found on that protocol, such as `tag` or `sub-function`.
-
- - **Message**. Define a message to display when the alert is triggered. Alert messages support alphanumeric characters and any traffic variables detected. For example, you might want to include the detected source and destination addresses. Use curly brackets (**{}**) to add variables to the alert message.
+ |Name |Description |
+ |||
+ |**Alert name** | Enter a meaningful name for the alert. |
+ |**Alert protocol** | Select the protocol you want to detect. <br> In specific cases, select one of the following protocols: <br> <br> - For a database data or structure manipulation event, select **TNS** or **TDS**. <br> - For a file event, select **HTTP**, **DELTAV**, **SMB**, or **FTP**, depending on the file type. <br> - For a package download event, select **HTTP**. <br> - For an open ports (dropped) event, select **TCP** or **UDP**, depending on the port type. <br> <br> To create rules that track specific changes in one of your OT protocols, such as S7 or CIP, use any parameters found on that protocol, such as `tag` or `sub-function`. |
+ |**Message** | Define a message to display when the alert is triggered. Alert messages support alphanumeric characters and any traffic variables detected. <br> <br> For example, you might want to include the detected source and destination addresses. Use curly brackets (**{}**) to add variables to the alert message. |
+ |**Direction** | Enter a source and/or destination IP address where you want to detect traffic. |
+ |**Conditions** | Define one or more conditions that must be met to trigger the alert. Select the **+** sign to create a condition set with multiple conditions that use the **AND** operator. If you select a MAC address or IP address as a variable, you must convert the value from a dotted-decimal address to decimal format, as shown in the example after this procedure. <br><br> The **+** sign is enabled only after you select an **Alert protocol**. <br> You must add at least one condition in order to create a custom alert rule. |
+ |**Detected** | Define a date and/or time range for the traffic you want to detect. You can customize the days and time range to fit with maintenance hours or set working hours. |
+ |**Action** | Define an action you want Defender for IoT to take automatically when the alert is triggered. |
- - **Direction**. Enter a source and/or destination IP address where you want to detect traffic.
+ For example:
+
+ :::image type="content" source="media/how-to-accelerate-alert-incident-response/create-custom-alert-rule.png" alt-text="Screenshot of the Create custom alert rule pane for creating custom alert rules." lightbox="media/how-to-accelerate-alert-incident-response/create-custom-alert-rule.png":::
- - **Conditions**. Define one or more conditions that must be met to trigger the alert. Select the **+** sign to create a condition set with multiple conditions that use the **AND** operator. If you select a MAC address or IP address as a variable, you must convert the value from a dotted-decimal address to decimal format.
+1. Select **Save** when you're done to save the rule.
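+
+If one of your conditions uses an IP or MAC address, convert the dotted-decimal or hexadecimal value to its decimal form before entering it. The following is a minimal PowerShell sketch of the conversion; both addresses shown are placeholders.
+
+```powershell
+# Convert a dotted-decimal IPv4 address (placeholder shown) to decimal.
+$ip    = [System.Net.IPAddress]::Parse('10.1.1.2')
+$bytes = $ip.GetAddressBytes()        # network (big-endian) order: 10, 1, 1, 2
+[Array]::Reverse($bytes)              # BitConverter expects little-endian order
+[BitConverter]::ToUInt32($bytes, 0)   # 167837954
+
+# Convert a MAC address (placeholder shown) by reading it as a 48-bit hexadecimal number.
+$mac = '00-50-56-C0-00-08'
+[Convert]::ToUInt64(($mac -replace '[-:]', ''), 16)   # 345052807176
+```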
- - **Detected**. Define a date and/or time range for the traffic you want to detect.
- - **Action**. Define an action you want Defender for IoT to take automatically when the alert is triggered.
+### Edit a custom alert rule
To edit a custom alert rule, select the rule and then select the options (**...**) menu > **Edit**. Modify the alert rule as needed and save your changes. Edits made to custom alert rules, such as changing a severity level or protocol, are tracked in the **Event timeline** page on the sensor console. For more information, see [Track sensor activity](how-to-track-sensor-activity.md).
-**To enable or disable custom alert rules**
+### Disable, enable, or delete custom alert rules
-You can disable custom alert rules to prevent them from running without deleting them altogether.
+Disable custom alert rules to prevent them from running without deleting them altogether.
In the **Custom alert rules** page, select one or more rules, and then select **Enable**, **Disable**, or **Delete** in the toolbar as needed.
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
This article provides a catalog of the pre-configured appliances available for M
Use the links in the tables below to jump to articles with more details about each appliance.
-Microsoft has partnered with [Arrow Electronics](https://www.arrow.com/) to provide pre-configured sensors. To purchase a pre-configured sensor, contact Arrow at: [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsoft.com&subject=Information%20about%20MD4IoT%20pre-configured%20appliances).
+Microsoft has partnered with [Arrow Electronics](https://www.arrow.com/) to provide pre-configured sensors. To purchase a pre-configured sensor, contact Arrow at: [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsoft.com&subject=Information%20about%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances&body=Dear%20Arrow%20Representative,%0D%0DOur%20organization%20is%20interested%20in%20receiving%20quotes%20for%20Microsoft%20Defender%20for%20IoT%20appliances%20as%20well%20as%20fulfillment%20options.%0D%0DThe%20purpose%20of%20this%20communication%20is%20to%20inform%20you%20of%20a%20project%20which%20involves%20[NUMBER]%20sites%20and%20[NUMBER]%20sensors%20for%20[ORGANIZATION%20NAME].%20Having%20reviewed%20potential%20appliances%20suitable%20for%20our%20project,%20we%20would%20like%20to%20obtain%20more%20information%20about:%20___________%0D%0D%0DI%20would%20appreciate%20being%20contacted%20by%20an%20Arrow%20representative%20to%20receive%20a%20quote%20for%20the%20items%20mentioned%20above.%0DI%20understand%20the%20quote%20and%20appliance%20delivery%20shall%20be%20in%20accordance%20with%20the%20relevant%20Arrow%20terms%20and%20conditions%20for%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances.%0D%0D%0DBest%20Regards,%0D%0D%0D%0D%0D%0D//////////////////////////////%20%0D/////////%20Replace%20[NUMBER]%20with%20appropriate%20values%20related%20to%20your%20request.%0D/////////%20Replace%20[ORGANIZATION%20NAME]%20with%20the%20name%20of%20the%20organization%20you%20represent.%0D//////////////////////////////%0D%0D).
For more information, see [Purchase sensors or download software for sensors](onboard-sensors.md#purchase-sensors-or-download-software-for-sensors).
-## Advantages of preconfigured appliances
+## Advantages of pre-configured appliances
Pre-configured physical appliances have been validated for Defender for IoT OT system monitoring, and have the following advantages over installing your own software:
Pre-configured physical appliances have been validated for Defender for IoT OT s
## Appliances for OT network sensors
-You can [order](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsoft.com&subject=Information%20about%20MD4IoT%20pre-configured%20appliances). any of the following preconfigured appliances for monitoring your OT networks:
+You can [order](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsoft.com&subject=Information%20about%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances&body=Dear%20Arrow%20Representative,%0D%0DOur%20organization%20is%20interested%20in%20receiving%20quotes%20for%20Microsoft%20Defender%20for%20IoT%20appliances%20as%20well%20as%20fulfillment%20options.%0D%0DThe%20purpose%20of%20this%20communication%20is%20to%20inform%20you%20of%20a%20project%20which%20involves%20[NUMBER]%20sites%20and%20[NUMBER]%20sensors%20for%20[ORGANIZATION%20NAME].%20Having%20reviewed%20potential%20appliances%20suitable%20for%20our%20project,%20we%20would%20like%20to%20obtain%20more%20information%20about:%20___________%0D%0D%0DI%20would%20appreciate%20being%20contacted%20by%20an%20Arrow%20representative%20to%20receive%20a%20quote%20for%20the%20items%20mentioned%20above.%0DI%20understand%20the%20quote%20and%20appliance%20delivery%20shall%20be%20in%20accordance%20with%20the%20relevant%20Arrow%20terms%20and%20conditions%20for%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances.%0D%0D%0DBest%20Regards,%0D%0D%0D%0D%0D%0D//////////////////////////////%20%0D/////////%20Replace%20[NUMBER]%20with%20appropriate%20values%20related%20to%20your%20request.%0D/////////%20Replace%20[ORGANIZATION%20NAME]%20with%20the%20name%20of%20the%20organization%20you%20represent.%0D//////////////////////////////%0D%0D) any of the following pre-configured appliances for monitoring your OT networks:
|Hardware profile |Appliance |Performance / Monitoring |Physical specifications | ||||| |**C5600** | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: 3Gbp/s <br>**Max devices**: 12,000 <br> 32 Cores/32G RAM/5.6TB | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) |
-|**E1800** | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/1.8TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
+|**E1800** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/1.8TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
|**E500** | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/512GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
-|**L500** | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: 200Mbp/s<br>**Max devices**: 1,000 <br> 4 Cores/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
+|**L500** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: 200Mbp/s<br>**Max devices**: 1,000 <br> 4 Cores/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
|**L100** | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: 10Mbp/s <br>**Max devices**: 100 <br> 4 Cores/8G RAM/128GB | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 | - > [!NOTE] > Bandwidth performance may vary depending on protocol distribution.
You can purchase any of the following appliances for your OT on-premises managem
|Hardware profile |Appliance |Max sensors |Physical specifications | |||||
-|**E1800** | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | 300 | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
+|**E1800** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | 300 | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
+
+For information about previously supported legacy appliances, see the [appliance catalog](/azure/defender-for-iot/organizations/appliance-catalog/).
## Next steps
-Continue understanding system requirements for physical or virtual appliances.
+Continue understanding system requirements for physical or virtual appliances.
For more information, see [Which appliances do I need?](ot-appliance-sizing.md) and [OT monitoring with virtual appliances](ot-virtual-appliances.md).
defender-for-iot Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes-archive.md
Last updated 08/07/2022
This article serves as an archive for features and enhancements released for Microsoft Defender for IoT for organizations more than nine months ago.
-For more recent updates, see [What's new in Microsoft Defender for IoT?](release-notes.md).
+For more recent updates, see [What's new in Microsoft Defender for IoT?](whats-new.md).
Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
defender-for-iot Release Notes Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes-sentinel.md
The **Microsoft Defender for IoT** solution enhances the integration between Def
For more information, see: -- [What's new in Microsoft Defender for IoT?](release-notes.md)
+- [What's new in Microsoft Defender for IoT?](whats-new.md)
- [Tutorial: Integrate Microsoft Sentinel and Microsoft Defender for IoT](../../sentinel/iot-solution.md?bc=%2fazure%2fdefender-for-iot%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fdefender-for-iot%2forganizations%2ftoc.json) - [Tutorial: Investigate and detect threats for IoT devices](../../sentinel/iot-advanced-threat-monitoring.md?bc=%2fazure%2fdefender-for-iot%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fdefender-for-iot%2forganizations%2ftoc.json).+ ## Version 2.1 **Released**: September 2022
New features in this version include:
- New SOC playbooks for automation with CVEs, triaging incidents that involve sensitive devices, and email notifications to device owners for new incidents.
-For more information, see [Updates to the Microsoft Defender for IoT solution](release-notes.md#updates-to-the-microsoft-defender-for-iot-solution-in-microsoft-sentinels-content-hub).
+For more information, see [Updates to the Microsoft Defender for IoT solution](whats-new.md#updates-to-the-microsoft-defender-for-iot-solution-in-microsoft-sentinels-content-hub).
## Version 2.0
For more information, see [Updates to the Microsoft Defender for IoT solution](r
This version provides enhanced experiences for managing, installing, and updating the solution package in the Microsoft Sentinel content hub. For more information, see [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions](../../sentinel/sentinel-solutions-deploy.md)+ ## Version 1.0.14 **Released**: July 2022 New features in this version include: -- [Microsoft Sentinel incident synch with Defender for IoT alerts](release-notes.md#microsoft-sentinel-incident-synch-with-defender-for-iot-alerts)
+- [Microsoft Sentinel incident synch with Defender for IoT alerts](whats-new.md#microsoft-sentinel-incident-synch-with-defender-for-iot-alerts)
- IoT device entities displayed in related Microsoft Sentinel incidents.
For more information about earlier versions of the **Microsoft Defender for IoT*
## Next steps
-Learn more in [What's new in Microsoft Defender for IoT?](release-notes.md) and the [Microsoft Sentinel documentation](../../sentinel/index.yml).
+Learn more in [What's new in Microsoft Defender for IoT?](whats-new.md) and the [Microsoft Sentinel documentation](../../sentinel/index.yml).
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Title: What's new in Microsoft Defender for IoT
-description: This article lets you know what's new in the latest release of Defender for IoT.
+ Title: OT monitoring software versions - Microsoft Defender for IoT
+description: This article lists Microsoft Defender for IoT on-premises OT monitoring software versions, including release and support dates and highlights for new features.
Previously updated : 11/03/2022 Last updated : 11/22/2022
-# What's new in Microsoft Defender for IoT?
+# OT monitoring software versions
-This article lists Microsoft Defender for IoT's new features and enhancements for end-user organizations from the last nine months.
+The Microsoft Defender for IoT architecture uses on-premises sensors and management servers.
-Features released earlier than nine months ago are listed in [What's new archive for Microsoft Defender for IoT for organizations](release-notes-archive.md).
+This article lists the supported software versions for the OT sensor and on-premises management software, including release dates, support dates, and highlights for the updated features.
-Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+For more information, including detailed descriptions and updates for cloud-only features, see [What's new in Microsoft Defender for IoT?](whats-new.md). Cloud-only features aren't dependent on specific sensor versions.
## Versioning and support for on-premises software versions
-The Defender for IoT architecture uses on-premises sensors and management servers. This section describes the servicing information and timelines for the available on-premises software versions.
+This section describes the servicing information, timelines, and guidance for the available on-premises software versions.
-- **Starting in version 22.1.x**, each General Availability (GA) version of the Defender for IoT sensor and on-premises management console software is supported for nine months after its first minor release date, not including hotfix releases.
+### Version update recommendations
- Release versions have the following syntax: **[Major][Minor][Hotfix]**
+When updating your on-premises software, we recommend:
- Therefore, for example, all **22.1.x** versions, including all hotfix versions, are supported for nine months after the first **22.1.x** release.
+- Plan to **update your sensors to the latest version once every six months**.
- Fixes and new functionality are applied to each new version and aren't applied to older versions.
--- **Software update packages include new functionality and security patches**. Urgent, high-risk security updates are applied in minor versions that may be released throughout the quarter. --- **Features available from the Azure portal that are dependent on a specific sensor version** are only available for sensors that have the required version installed, or higher.-
-For more information, see the [Microsoft Security Development Lifecycle practices](https://www.microsoft.com/en-us/securityengineering/sdl/), which describes Microsoft's SDK practices, including training, compliance, threat modeling, design requirements, tools such as Microsoft Component Governance, pen testing, and more.
-
-> [!IMPORTANT]
-> Manual changes to software packages may have detrimental effects on the sensor and on-premises management console. Microsoft is unable to support deployments with manual changes made to packages.
->
-
-> [!TIP]
-> - Version numbers are listed only in this article, and not in detailed descriptions elsewhere in the documentation. To understand whether a feature is supported in your sensor version, check the listed features for that sensor version on this page.
->
-> - When updating your sensor software version, make sure to also update your on-premises management console. For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
-
-**Current versions of the sensor and on-premises management console software include**:
-
-| Version | Date released | End support date |
-|--|--|--|
-| 22.2.7 | 10/2022 | 04/2023 |
-| 22.2.6 | 09/2022 | 04/2023 |
-| 22.2.5 | 08/2022 | 04/2023 |
-| 22.2.4 | 07/2022 | 04/2023 |
-| 22.2.3 | 07/2022 | 04/2023 |
-| 22.1.7 | 07/2022 | 04/2023 |
-| 22.1.6 | 06/2022 | 10/2022 |
-| 22.1.5 | 06/2022 | 10/2022 |
-| 22.1.4 | 04/2022 | 10/2022 |
-| 22.1.3 | 03/2022 | 10/2022 |
-| 22.1.1 | 02/2022 | 10/2022 |
-| 10.5.5 | 12/2021 | 09/2022 |
-| 10.5.4 | 12/2021 | 09/2022 |
-| 10.5.3 | 10/2021 | 07/2022 |
-| 10.5.2 | 10/2021 | 07/2022 |
-
-## October 2022
-
-|Service area |Updates |
-|||
-|**OT networks** | [Enhanced OT monitoring alert reference](#enhanced-ot-monitoring-alert-reference) |
-
-### Enhanced OT monitoring alert reference
-
-Our alert reference article now includes the following details for each alert:
--- **Alert category**, helpful when you want to investigate alerts that are aggregated by a specific activity or configure SIEM rules to generate incidents based on specific activities.--- **MITRE ATT&CK for ICS tactics and techniques**, which describe the actions an adversary may take while operating within the network. Use the tactics and techniques listed for each alert to learn about the network areas that might be at risk and collaborate more efficiently across your security and OT teams more as you secure those assets.--- **Alert threshold**, for relevant alerts. Thresholds indicate the specific point at which an alert is triggered. Modify alert thresholds as needed from the sensor's **Support** page.-
-For more information, see [OT monitoring alert types and descriptions](alert-engine-messages.md), specifically [Supported alert categories](alert-engine-messages.md#supported-alert-categories).
-
-## September 2022
-
-|Service area |Updates |
-|||
-|**OT networks** |**All supported OT sensor software versions**: <br>- [Device vulnerabilities from the Azure portal](#device-vulnerabilities-from-the-azure-portal-public-preview)<br>- [Security recommendations for OT networks](#security-recommendations-for-ot-networks-public-preview)<br><br> **All OT sensor software versions 22.x**: [Updates for Azure cloud connection firewall rules](#updates-for-azure-cloud-connection-firewall-rules-public-preview) <br><br>**Sensor software version 22.2.7**: <br> - Bug fixes and stability improvements <br><br> **Sensor software version 22.2.6**: <br> - Bug fixes and stability improvements <br>- Enhancements to the device type classification algorithm<br><br>**Microsoft Sentinel integration**: <br>- [Investigation enhancements with IoT device entities](#investigation-enhancements-with-iot-device-entities-in-microsoft-sentinel)<br>- [Updates to the Microsoft Defender for IoT solution](#updates-to-the-microsoft-defender-for-iot-solution-in-microsoft-sentinels-content-hub)|
-
-### Security recommendations for OT networks (Public preview)
-
-Defender for IoT now provides security recommendations to help customers manage their OT/IoT network security posture. Defender for IoT recommendations help users form actionable, prioritized mitigation plans that address the unique challenges of OT/IoT networks. Use recommendations for lower your network's risk and attack surface.
-
-You can see the following security recommendations from the Azure portal for detected devices across your networks:
--- **Review PLC operating mode**. Devices with this recommendation are found with PLCs set to unsecure operating mode states. We recommend setting PLC operating modes to the **Secure Run** state if access is no longer required to the PLC to reduce the threat of malicious PLC programming.--- **Review unauthorized devices**. Devices with this recommendation must be identified and authorized as part of the network baseline. We recommend taking action to identify any indicated devices. Disconnect any devices from your network that remain unknown even after investigation to reduce the threat of rogue or potentially malicious devices.-
-Access security recommendations from one of the following locations:
--- The **Recommendations** page, which displays all current recommendations across all detected OT devices.--- The **Recommendations** tab on a device details page, which displays all current recommendations for the selected device.-
-From either location, select a recommendation to drill down further and view lists of all detected OT devices that are currently in a *healthy* or *unhealthy* state, according to the selected recommendation. From the **Unhealthy devices** or **Healthy devices** tab, select a device link to jump to the selected device details page. For example:
--
-For more information, see [View the device inventory](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory).
-
-### Device vulnerabilities from the Azure portal (Public preview)
-
-Defender for IoT now provides vulnerability data in the Azure portal for detected OT network devices. Vulnerability data is based on the repository of standards based vulnerability data documented at the [US government National Vulnerability Database (NVD)](https://www.nist.gov/programs-projects/national-vulnerability-database-nvd).
-
-Access vulnerability data in the Azure portal from the following locations:
--- On a device details page, select the **Vulnerabilities** tab to view current vulnerabilities on the selected device. For example, from the **Device inventory** page, select a specific device and then select **Vulnerabilities**.-
- For more information, see [View the device inventory](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory).
--- A new **Vulnerabilities** workbook displays vulnerability data across all monitored OT devices. Use the **Vulnerabilities** workbook to view data like CVE by severity or vendor, and full lists of detected vulnerabilities and vulnerable devices and components.-
- Select an item in the **Device vulnerabilities**, **Vulnerable devices**, or **Vulnerable components** tables to view related information in the tables on the right.
-
- For example:
-
- :::image type="content" source="media/release-notes/vulnerabilities-workbook.png" alt-text="Screenshot of a Vulnerabilities workbook in Defender for IoT.":::
-
- For more information, see [Use Azure Monitor workbooks in Microsoft Defender for IoT](workbooks.md).
-
-### Updates for Azure cloud connection firewall rules (Public preview)
-
-OT network sensors connect to Azure to provide alert and device data and sensor health messages, access threat intelligence packages, and more. Connected Azure services include IoT Hub, Blob Storage, Event Hubs, and the Microsoft Download Center.
-
-For OT sensors with software versions 22.x and higher, Defender for IoT now supports increased security when adding outbound allow rules for connections to Azure. Now you can define your outbound allow rules to connect to Azure without using wildcards.
-
-When defining outbound allow rules to connect to Azure, you'll need to enable HTTPS traffic to each of the required endpoints on port 443. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.
-
-For supported sensor versions, download the full list of required secure endpoints from the following locations in the Azure portal:
--- **A successful sensor registration page**: After onboarding a new OT sensor, version 22.x, the successful registration page now provides instructions for next steps, including a link to the endpoints you'll need to add as secure outbound allow rules on your network. Select the **Download endpoint details** link to download the JSON file.-
- For example:
-
- :::image type="content" source="media/release-notes/download-endpoints.png" alt-text="Screenshot of a successful OT sensor registration page with the download endpoints link.":::
--- **The Sites and sensors page**: Select an OT sensor with software versions 22.x or higher, or a site with one or more supported sensor versions. Then, select **More actions** > **Download endpoint details** to download the JSON file. For example:-
- :::image type="content" source="media/release-notes/download-endpoints-sites-sensors.png" alt-text="Screenshot of the Sites and sensors page with the download endpoint details link.":::
-
-For more information, see:
--- [Tutorial: Get started with Microsoft Defender for IoT for OT security](tutorial-onboarding.md)-- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)-- [Networking requirements](how-to-set-up-your-network.md#sensor-access-to-azure-portal)-
-### Investigation enhancements with IoT device entities in Microsoft Sentinel
-
-Defender for IoT's integration with Microsoft Sentinel now supports an IoT device entity page. When investigating incidents and monitoring IoT security in Microsoft Sentinel, you can now identify your most sensitive devices and jump directly to more details on each device entity page.
-
-The IoT device entity page provides contextual device information about an IoT device, with basic device details and device owner contact information. Device owners are defined by site in the **Sites and sensors** page in Defender for IoT.
-
-The IoT device entity page can help prioritize remediation based on device importance and business impact, as per each alert's site, zone, and sensor. For example:
--
-You can also now hunt for vulnerable devices on the Microsoft Sentinel **Entity behavior** page. For example, view the top five IoT devices with the highest number of alerts, or search for a device by IP address or device name:
--
-For more information, see [Investigate further with IoT device entities](../../sentinel/iot-advanced-threat-monitoring.md#investigate-further-with-iot-device-entities) and [Site management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#site-management-options-from-the-azure-portal).
-
-### Updates to the Microsoft Defender for IoT solution in Microsoft Sentinel's content hub
-
-This month, we've released version 2.0 of the **Microsoft Defender for IoT** solution in Microsoft Sentinel's content hub, previously known as the **IoT/OT Threat Monitoring with Defender for IoT** solution.
-
-Updates in this version of the solution include:
--- **A name change**. If you'd previously installed the **IoT/OT Threat Monitoring with Defender for IoT** solution in your Microsoft Sentinel workspace, the solution is automatically renamed to **Microsoft Defender for IoT**, even if you don't update the solution.--- **Workbook improvements**: The **Defender for IoT** workbook now includes:-
- - A new **Overview** dashboard with key metrics on the device inventory, threat detection, and security posture. For example:
-
- :::image type="content" source="media/release-notes/sentinel-workbook-overview.png" alt-text="Screenshot of the new Overview tab in the IoT OT Threat Monitoring with Defender for IoT workbook." lightbox="media/release-notes/sentinel-workbook-overview.png":::
-
- - A new **Vulnerabilities** dashboard with details about CVEs shown in your network and their related vulnerable devices. For example:
-
- :::image type="content" source="media/release-notes/sentinel-workbook-vulnerabilities.png" alt-text="Screenshot of the new Vulnerability tab in the IoT OT Threat Monitoring with Defender for IoT workbook." lightbox="media/release-notes/sentinel-workbook-vulnerabilities.png":::
-
- - Improvements on the **Device inventory** dashboard, including access to device recommendations, vulnerabilities, and direct links to the Defender for IoT device details pages. The **Device inventory** dashboard in the **IoT/OT Threat Monitoring with Defender for IoT** workbook is fully aligned with the Defender for IoT [device inventory data](how-to-manage-device-inventory-for-organizations.md).
--- **Playbook updates**: The **Microsoft Defender for IoT** solution now supports the following SOC automation functionality with new playbooks:-
- - **Automation with CVE details**: Use the *AD4IoT-CVEAutoWorkflow* playbook to enrich incident comments with CVEs of related devices based on Defender for IoT data. The incidents are triaged, and if the CVE is critical, the asset owner is notified about the incident by email.
-
- - **Automation for email notifications to device owners**. Use the *AD4IoT-SendEmailtoIoTOwner* playbook to have a notification email automatically sent to a device's owner about new incidents. Device owners can then reply to the email to update the incident as needed. Device owners are defined at the site level in Defender for IoT.
-
- - **Automation for incidents with sensitive devices**: Use the *AD4IoT-AutoTriageIncident* playbook to automatically update an incident's severity based on the devices involved in the incident, and their sensitivity level or importance to your organization. For example, any incident involving a sensitive device can be automatically escalated to a higher severity level.
-
-For more information, see [Investigate Microsoft Defender for IoT incidents with Microsoft Sentinel](../../sentinel/iot-advanced-threat-monitoring.md?bc=%2fazure%2fdefender-for-iot%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fdefender-for-iot%2forganizations%2ftoc.json).
-
-## August 2022
-
-|Service area |Updates |
-|||
-|**OT networks** |**Sensor software version 22.2.5**: Minor version with stability improvements<br><br>**Sensor software version 22.2.4**: [New alert columns with timestamp data](#new-alert-columns-with-timestamp-data)<br><br>**Sensor software version 22.1.3**: [Sensor health from the Azure portal (Public preview)](#sensor-health-from-the-azure-portal-public-preview) |
-
-### New alert columns with timestamp data
-
-Starting with OT sensor version 22.2.4, Defender for IoT alerts in the Azure portal and the sensor console now show the following columns and data:
--- **Last detection**. Defines the last time the alert was detected in the network, and replaces the **Detection time** column.--- **First detection**. Defines the first time the alert was detected in the network.--- **Last activity**. Defines the last time the alert was changed, including manual updates for severity or status, or automated changes for device updates or device/alert de-duplication.-
-The **First detection** and **Last activity** columns aren't displayed by default. Add them to your **Alerts** page as needed.
-
-> [!TIP]
-> If you're also a Microsoft Sentinel user, you'll be familiar with similar data from your Log Analytics queries. The new alert columns in Defender for IoT are mapped as follows:
->
-> - The Defender for IoT **Last detection** time is similar to the Log Analytics **EndTime**
-> - The Defender for IoT **First detection** time is similar to the Log Analytics **StartTime**
-> - The Defender for IoT **Last activity** time is similar to the Log Analytics **TimeGenerated**
-For more information, see:
--- [View alerts on the Defender for IoT portal](how-to-manage-cloud-alerts.md)-- [View alerts on your sensor](how-to-view-alerts.md)-- [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md)-
-### Sensor health from the Azure portal (Public preview)
-
-For OT sensor versions 22.1.3 and higher, you can use the new sensor health widgets and table column data to monitor sensor health directly from the **Sites and sensors** page on the Azure portal.
--
-We've also added a sensor details page, where you drill down to a specific sensor from the Azure portal. On the **Sites and sensors** page, select a specific sensor name. The sensor details page lists basic sensor data, sensor health, and any sensor settings applied.
-
-For more information, see [Understand sensor health (Public preview)](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health-public-preview) and [Sensor health message reference](sensor-health-messages.md).
-
-## July 2022
-
-|Service area |Updates |
-|||
-|**Enterprise IoT networks** | - [Enterprise IoT and Defender for Endpoint integration in GA](#enterprise-iot-and-defender-for-endpoint-integration-in-ga) |
-|**OT networks** |**Sensor software version 22.2.4**: <br>- [Device inventory enhancements](#device-inventory-enhancements)<br>- [Enhancements for the ServiceNow integration API](#enhancements-for-the-servicenow-integration-api)<br><br>**Sensor software version 22.2.3**:<br>- [OT appliance hardware profile updates](#ot-appliance-hardware-profile-updates)<br>- [PCAP access from the Azure portal](#pcap-access-from-the-azure-portal-public-preview)<br>- [Bi-directional alert synch between sensors and the Azure portal](#bi-directional-alert-synch-between-sensors-and-the-azure-portal-public-preview)<br>- [Sensor connections restored after certificate rotation](#sensor-connections-restored-after-certificate-rotation)<br>- [Support diagnostic log enhancements](#support-diagnostic-log-enhancements-public-preview)<br>- [Improved security for uploading protocol plugins](#improved-security-for-uploading-protocol-plugins)<br>- [Sensor names shown in browser tabs](#sensor-names-shown-in-browser-tabs)<br><br>**Sensor software version 22.1.7**: <br>- [Same passwords for *cyberx_host* and *cyberx* users](#same-passwords-for-cyberx_host-and-cyberx-users) <br><br>**To update to version 22.2.x**:<br>- **From version 22.1.x**, update directly to the latest **22.2.x** version<br>- **From version 10.x**, first update to the latest **22.1.x** version, and then update again to the latest **22.2.x** version <br><br>For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md). |
-|**Cloud-only features** | - [Microsoft Sentinel incident synch with Defender for IoT alerts](#microsoft-sentinel-incident-synch-with-defender-for-iot-alerts) |
-
-### Enterprise IoT and Defender for Endpoint integration in GA
-
-The Enterprise IoT integration with Microsoft Defender for Endpoint is now in General Availability (GA). With this update, we've made the following updates and improvements:
--- Onboard an Enterprise IoT plan directly in Defender for Endpoint. For more information, see [Enable Enterprise IoT security in Defender for Endpoint](eiot-defender-for-endpoint.md).--- Seamless integration with Microsoft Defender for Endpoint to view detected Enterprise IoT devices, and their related alerts, vulnerabilities, and recommendations in the Microsoft 365 Security portal. For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md). You can continue to use an Enterprise IoT network sensor (Public preview) and view detected Enterprise IoT devices on the Defender for IoT **Device inventory** page in the Azure portal.--- All Enterprise IoT sensors are now automatically added to the same site in Defender for IoT, named **Enterprise network**. When onboarding a new Enterprise IoT device, you only need to define a sensor name and select your subscription, without defining a site or zone.
+- Update to a **patch version only for specific bug fixes or security patches**. When working with the Microsoft support team on a specific issue, verify which patch version is recommended to resolve your issue.
> [!NOTE]
-> The Enterprise IoT network sensor and all detections remain in Public Preview.
-
-### Same passwords for cyberx_host and cyberx users
-
-During OT monitoring software installations and updates, the **cyberx** user is assigned a random password. When updating from version 10.x.x to version 22.1.7, the **cyberx_host** password is assigned with an identical password to the **cyberx** user.
-
-For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
-
-### Device inventory enhancements
-
-Starting in OT sensor versions 22.2.4, you can now take the following actions from the sensor console's **Device inventory** page:
--- **Merge duplicate devices**. You may need to merge devices if the sensor has discovered separate network entities that are associated with a single, unique device. Examples of this scenario might include a PLC with four network cards, a laptop with both WiFi and a physical network card, or a single workstation with multiple network cards.--- **Delete single devices**. Now, you can delete a single device that hasn't communicated for at least 10 minutes.--- **Delete inactive devices by admin users**. Now, all admin users, in addition to the **cyberx** user, can delete inactive devices.-
-Also starting in version 22.2.4, in the sensor console's **Device inventory** page, the **Last seen** value in the device details pane is replaced by **Last activity**. For example:
--
-For more information, see [Manage your OT device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md).
-
-### Enhancements for the ServiceNow integration API
-
-OT sensor version 22.2.4 provides enhancements for the `devicecves` API, which gets details about the CVEs found for a given device.
-
-Now you can add any of the following parameters to your query to fine tune your results:
--- "**sensorId**" - Shows results from a specific sensor, as defined by the given sensor ID.-- "**score**" - Determines a minimum CVE score to be retrieved. All results will have a CVE score equal to or higher than the given value. Default = **0**.-- "**deviceIds**" - A comma-separated list of device IDs from which you want to show results. For example: **1232,34,2,456**-
-For more information, see [devicecves (Get device CVEs)](api/management-integration-apis.md#devicecves-get-device-cves).
-
-### OT appliance hardware profile updates
-
-We've refreshed the naming conventions for our OT appliance hardware profiles for greater transparency and clarity.
-
-The new names reflect both the *type* of profile, including *Corporate*, *Enterprise*, and *Production line*, and also the related disk storage size.
-
-Use the following table to understand the mapping between legacy hardware profile names and the current names used in the updated software installation:
-
-|Legacy name |New name | Description |
-||||
-|**Corporate** | **C5600** | A *Corporate* environment, with: <br>16 Cores<br>32-GB RAM<br>5.6-TB disk storage |
-|**Enterprise** | **E1800** | An *Enterprise* environment, with: <br>8 Cores<br>32-GB RAM<br>1.8-TB disk storage |
-|**SMB** | **L500** | A *Production line* environment, with: <br>4 Cores<br>8-GB RAM<br>500-GB disk storage |
-|**Office** | **L100** | A *Production line* environment, with: <br>4 Cores<br>8-GB RAM<br>100-GB disk storage |
-|**Rugged** | **L64** | A *Production line* environment, with: <br>4 Cores<br>8-GB RAM<br>64-GB disk storage |
-
-We also now support new enterprise hardware profiles, for sensors supporting both 500 GB and 1-TB disk sizes.
-
-For more information, see [Which appliances do I need?](ot-appliance-sizing.md)
-
-### PCAP access from the Azure portal (Public preview)
-
-Now you can access the raw traffic files, known as packet capture files or PCAP files, directly from the Azure portal. This feature supports SOC or OT security engineers who want to investigate alerts from Defender for IoT or Microsoft Sentinel, without having to access each sensor separately.
--
-PCAP files are downloaded to your Azure storage.
-
-For more information, see [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md).
-
-### Bi-directional alert synch between sensors and the Azure portal (Public preview)
-
-For sensors updated to version 22.2.1, alert statuses and learn statuses are now fully synchronized between the sensor console and the Azure portal. For example, this means that you can close an alert on the Azure portal or the sensor console, and the alert status is updated in both locations.
-
-*Learn* an alert from either the Azure portal or the sensor console to ensure that it's not triggered again the next time the same network traffic is detected.
-
-The sensor console is also synchronized with an on-premises management console, so that alert statuses and learn statuses remain up-to-date across your management interfaces.
-
-For more information, see:
--- [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)-- [View alerts on your sensor](how-to-view-alerts.md)-- [Manage alerts from the sensor console](how-to-manage-the-alert-event.md)-- [Work with alerts on the on-premises management console](how-to-work-with-alerts-on-premises-management-console.md)-
-### Sensor connections restored after certificate rotation
-
-Starting in version 22.2.3, after rotating your certificates, your sensor connections are automatically restored to your central manager, and you don't need to reconnect them manually.
-
-For more information, see [About certificates](how-to-deploy-certificates.md).
-
-### Support diagnostic log enhancements (Public preview)
-
-Starting in sensor version [22.1.1](#new-support-diagnostics-log), you've been able to download a diagnostic log from the sensor console to send to support when you open a ticket.
-
-Now, for locally managed sensors, you can upload that diagnostic log directly on the Azure portal.
--
-> [!TIP]
-> For cloud-connected sensors, starting from sensor version [22.1.3](#march-2022), the diagnostic log is automatically available to support when you open the ticket.
+> If you have an on-premises management console, make sure to also update it to the same version as your sensors.
>
-For more information, see:
--- [Download a diagnostics log for support](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support)-- [Upload a diagnostics log for support](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support-public-preview)-
-### Improved security for uploading protocol plugins
-
-This version of the sensor provides an improved security for uploading proprietary plugins you've created using the Horizon SDK.
--
-For more information, see [Manage proprietary protocols with Horizon plugins](resources-manage-proprietary-protocols.md).
-
-### Sensor names shown in browser tabs
-
-Starting in sensor version 22.2.3, your sensor's name is displayed in the browser tab, making it easier for you to identify the sensors you're working with.
-
-For example:
--
-### Microsoft Sentinel incident synch with Defender for IoT alerts
-
-The **IoT OT Threat Monitoring with Defender for IoT** solution now ensures that alerts in Defender for IoT are updated with any related incident **Status** changes from Microsoft Sentinel.
-
-This synchronization overrides any status defined in Defender for IoT, in the Azure portal or the sensor console, so that the alert statuses match that of the related incident.
-
-Update your **IoT OT Threat Monitoring with Defender for IoT** solution to use the latest synchronization support, including the new [**AD4IoT-AutoAlertStatusSync** playbook](../../sentinel/iot-advanced-threat-monitoring.md#update-alert-statuses-in-defender-for-iot). After updating the solution, make sure that you also take the [required steps](../../sentinel/iot-advanced-threat-monitoring.md#playbook-prerequisites) to ensure that the new playbook works as expected.
-
-For more information, see:
--- [Integrate Defender for Iot and Sentinel](../../sentinel/iot-advanced-threat-monitoring.md)-- [Update alert statuses playbook](../../sentinel/iot-advanced-threat-monitoring.md#update-alert-statuses-in-defender-for-iot)-- [View and manage alerts on the Defender for IoT portal (Preview)](how-to-manage-cloud-alerts.md)-- [View alerts on your sensor](how-to-view-alerts.md)-
-## June 2022
--- **Sensor software version 22.1.6**: Minor version with maintenance updates for internal sensor components--- **Sensor software version 22.1.5**: Minor version to improve TI installation packages and software updates-
-We've also recently optimized and enhanced our documentation as follows:
--- [Updated appliance catalog for OT environments](#updated-appliance-catalog-for-ot-environments)-- [Documentation reorganization for end-user organizations](#documentation-reorganization-for-end-user-organizations)
+For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
-### Updated appliance catalog for OT environments
-
-We've refreshed and revamped the catalog of supported appliances for monitoring OT environments. These appliances support flexible deployment options for environments of all sizes and can be used to host both the OT monitoring sensor and on-premises management consoles.
-
-Use the new pages as follows:
-
-1. **Understand which hardware model best fits your organization's needs.** For more information, see [Which appliances do I need?](ot-appliance-sizing.md)
+### On-premises monitoring software versions
-1. **Learn about the preconfigured hardware appliances that are available to purchase, or system requirements for virtual machines.** For more information, see [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md) and [OT monitoring with virtual appliances](ot-virtual-appliances.md).
+Cloud features may be dependent on a specific sensor version. Such features are listed below for the relevant software versions, and are only available for data coming from sensors that have the required version installed, or higher.
- For more information about each appliance type, use the linked reference page, or browse through our new **Reference > OT monitoring appliances** section.
- :::image type="content" source="media/release-notes/appliance-catalog.png" alt-text="Screenshot of the new appliance catalog reference section." lightbox="media/release-notes/appliance-catalog.png":::
+| Version / Patch | Release date | Scope | Supported until |
+| - | - | -- | - |
+| **22.2** | | | |
+| 22.2.7| 10/2022 | Patch | 09/2023 |
+| 22.2.6|09/2022 |Patch | 04/2023|
+|22.2.5 |08/2022 | Patch| 04/2023 |
+|22.2.4 |07/2022 |Patch |04/2023 |
+| 22.2.3| 07/2022| Major| 04/2023|
+| **22.1** | | | |
+| 22.1.7| 07/2022 |Patch | 06/2023 |
+| 22.1.6| 06/2022 |Patch |10/2022 |
+| 22.1.5| 06/2022 |Patch | 10/2022 |
+| 22.1.4|04/2022 | Patch|10/2022 |
+| 22.1.3|03/2022 |Patch | 10/2022|
+| 22.1.2| 02/2022 | Major|10/2022 |
+| **10.5** | | | |
+|10.5.5 |12/2021 |Patch | 09/2022|
+|10.5.4 |12/2021 |Patch | 09/2022|
+| 10.5.3| 10/2021 |Patch | 07/2022|
+| 10.5.2| 10/2021 | Major| 07/2022|
- Reference articles for each appliance type, including virtual appliances, include specific steps to configure the appliance for OT monitoring with Defender for IoT. Generic software installation and troubleshooting procedures are still documented in [Defender for IoT software installation](how-to-install-software.md).
+### Threat intelligence updates
-### Documentation reorganization for end-user organizations
+Threat intelligence updates are continuously available and are independent of specific sensor versions. You don't need to update your sensor version in order to get the latest threat intelligence updates.
-We recently reorganized our Defender for IoT documentation for end-user organizations, highlighting a clearer path for onboarding and getting started.
+For more information, see [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md).
-Check out our new structure to follow through viewing devices and assets, managing alerts, vulnerabilities and threats, integrating with other services, and deploying and maintaining your Defender for IoT system.
+### Support model
-**New and updated articles include**:
+Versions **22.1.7**, **22.2.7**, and any later versions are supported for 1 year from their release. For example, version **22.2.7** was released in **October 2022** and is supported through **September 2023**.
-- [Welcome to Microsoft Defender for IoT for organizations](overview.md)-- [Microsoft Defender for IoT architecture](architecture.md)-- [Quickstart: Get started with Defender for IoT](getting-started.md)-- [Tutorial: Microsoft Defender for IoT trial setup](tutorial-onboarding.md)-- [Plan your sensor connections for OT monitoring](best-practices/plan-network-monitoring.md)-- [About Microsoft Defender for IoT network setup](how-to-set-up-your-network.md)
+Other versions use a legacy support model. For more information, see the tables and sections for each version below.
-> [!NOTE]
-> To send feedback on docs via GitHub, scroll to the bottom of the page and select the **Feedback** option for **This page**. We'd be glad to hear from you!
+> [!IMPORTANT]
+> Manual changes to software packages may have detrimental effects on the sensor and on-premises management console. Microsoft is unable to support deployments with manual changes made to software packages.
>
+### Feature documentation per version
-## April 2022
--- [Extended device property data in the Device inventory](#extended-device-property-data-in-the-device-inventory)-
-### Extended device property data in the Device inventory
-
-**Sensor software version**: 22.1.4
-
-Starting for sensors updated to version 22.1.4, the **Device inventory** page on the Azure portal shows extended data for the following fields:
--- **Description**-- **Tags**-- **Protocols**-- **Scanner**-- **Last Activity**-
-For more information, see [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md).
-
-## March 2022
-
-**Sensor version**: 22.1.3
--- [Use Azure Monitor workbooks with Microsoft Defender for IoT](#use-azure-monitor-workbooks-with-microsoft-defender-for-iot-public-preview)-- [IoT OT Threat Monitoring with Defender for IoT solution GA](#iot-ot-threat-monitoring-with-defender-for-iot-solution-ga)-- [Edit and delete devices from the Azure portal](#edit-and-delete-devices-from-the-azure-portal-public-preview)-- [Key state alert updates](#key-state-alert-updates-public-preview)-- [Sign out of a CLI session](#sign-out-of-a-cli-session)--
-### Use Azure Monitor workbooks with Microsoft Defender for IoT (Public preview)
-
-[Azure Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md) provide graphs and dashboards that visually reflect your data, and are now available directly in Microsoft Defender for IoT with data from [Azure Resource Graph](../../governance/resource-graph/index.yml).
-
-In the Azure portal, use the new Defender for IoT **Workbooks** page to view workbooks created by Microsoft and provided out-of-the-box, or create custom workbooks of your own.
--
-For more information, see [Use Azure Monitor workbooks in Microsoft Defender for IoT](workbooks.md).
-
-### IoT OT Threat Monitoring with Defender for IoT solution GA
-
-The IoT OT Threat Monitoring with Defender for IoT solution in Microsoft Sentinel is now GA. In the Azure portal, use this solution to help secure your entire OT environment, whether you need to protect existing OT devices or build security into new OT innovations.
-
-For more information, see [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md) and [Tutorial: Investigate Microsoft Defender for IoT devices with Microsoft Sentinel](../../sentinel/iot-advanced-threat-monitoring.md).
-
-### Edit and delete devices from the Azure portal (Public preview)
-
-The **Device inventory** page in the Azure portal now supports the ability to edit device details, such as security, classification, location, and more:
--
-For more information, see [Edit device details](how-to-manage-device-inventory-for-organizations.md#edit-device-details).
-
-You can only delete devices from Defender for IoT if they've been inactive for more than 14 days. For more information, see [Delete a device](how-to-manage-device-inventory-for-organizations.md#delete-a-device).
-
-### Key state alert updates (Public preview)
-
-Defender for IoT now supports the Rockwell protocol for PLC operating mode detections.
-
-For the Rockwell protocol, the **Device inventory** pages in both the Azure portal and the sensor console now indicate the PLC operating mode key and run state, and whether the device is currently in a secure mode.
+Version numbers are listed only in this article and in the [What's new in Microsoft Defender for IoT?](whats-new.md) article, and not in detailed descriptions elsewhere in the documentation.
-If the device's PLC operating mode is ever switched to an unsecured mode, such as *Program* or *Remote*, a **PLC Operating Mode Changed** alert is generated.
+To understand whether a feature is supported in your sensor version, check the relevant version section below and its listed features.
-For more information, see [Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md).
+## Versions 22.2.x
-### Sign out of a CLI session
-Starting in this version, CLI users are automatically signed out of their session after 300 inactive seconds. To sign out manually, use the new `logout` CLI command.
+To update to 22.2.x versions:
-For more information, see [Work with Defender for IoT CLI commands](references-work-with-defender-for-iot-cli-commands.md).
+- **From version 22.1.x**, update directly to the latest **22.2.x** version.
+- **From version 10.x**, first update to the latest **22.1.x** version, and then update again to the latest **22.2.x** version.
+For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
-## February 2022
-
-**Sensor software version**: 22.1.1
--- [New sensor installation wizard](#new-sensor-installation-wizard)-- [Sensor redesign and unified Microsoft product experience](#sensor-redesign-and-unified-microsoft-product-experience)-- [Enhanced sensor Overview page](#enhanced-sensor-overview-page)-- [New support diagnostics log](#new-support-diagnostics-log)-- [Alert updates](#alert-updates)-- [Custom alert updates](#custom-alert-updates)-- [CLI command updates](#cli-command-updates)-- [Update to version 22.1.x](#update-to-version-221x)-- [New connectivity model and firewall requirements](#new-connectivity-model-and-firewall-requirements)-- [Protocol improvements](#protocol-improvements)-- [Modified, replaced, or removed options and configurations](#modified-replaced-or-removed-options-and-configurations)-
-### New sensor installation wizard
-
-Previously, you needed to use separate dialogs to upload a sensor activation file, verify your sensor network configuration, and configure your SSL/TLS certificates.
-
-Now, when installing a new sensor or a new sensor version, our installation wizard provides a streamlined interface to do all these tasks from a single location.
-
-For more information, see [Defender for IoT installation](how-to-install-software.md).
-
-### Sensor redesign and unified Microsoft product experience
-
-The Defender for IoT sensor console has been redesigned to create a unified Microsoft Azure experience and enhance and simplify workflows.
-
-These features are now Generally Available (GA). Updates include the general look and feel, drill-down panes, search and action options, and more. For example:
-
-**Simplified workflows include**:
--- The **Device inventory** page now includes detailed device pages. Select a device in the table and then select **View full details** on the right.-
- :::image type="content" source="media/release-notes/device-inventory-details.png" alt-text="Screenshot of the View full details button." lightbox="media/release-notes/device-inventory-details.png":::
--- Properties updated from the sensor's inventory are now automatically updated in the cloud device inventory.--- The device details pages, accessed either from the **Device map** or **Device inventory** pages, is shown as read only. To modify device properties, select **Edit properties** on the bottom-left.--- The **Data mining** page now includes reporting functionality. While the **Reports** page was removed, users with read-only access can view updates on the **Data mining page** without the ability to modify reports or settings.-
- For admin users creating new reports, you can now toggle on a **Send to CM** option to send the report to a central management console as well. For more information, see [Create a report](how-to-create-data-mining-queries.md#create-a-report).
--- The **System settings** area has been reorganized in to sections for *Basic* settings, settings for *Network monitoring*, *Sensor management*, *Integrations*, and *Import settings*.--- The sensor online help now links to key articles in the Microsoft Defender for IoT documentation.-
-**Defender for IoT maps now include**:
--- A new **Map View** is now shown for alerts and on the device details pages, showing where in your environment the alert or device is found.--- Right-click a device on the map to view contextual information about the device, including related alerts, event timeline data, and connected devices.--- To enable the ability to collapse IT networks, ensure that the **Toggle IT Networks Grouping** option is enabled. This option is now only available from the map.--- The **Simplified Map View** option has been removed.-
-We've also implemented global readiness and accessibility features to comply with Microsoft standards. In the on-premises sensor console, these updates include both high contrast and regular screen display themes and localization for over 15 languages.
-
-For example:
--
-Access global readiness and accessibility options from the **Settings** icon at the top-right corner of your screen:
--
-### Enhanced sensor Overview page
-
-The Defender for IoT sensor portal's **Dashboard** page has been renamed as **Overview**, and now includes data that better highlights system deployment details, critical network monitoring health, top alerts, and important trends and statistics.
--
-The Overview page also now serves as a *black box* to view your overall sensor status in case your outbound connections, such as to the Azure portal, go down.
-
-Create more dashboards using the **Trends & Statistics** page, located under the **Analyze** menu on the left.
-
-### New support diagnostics log
-
-Now you can get a summary of the log and system information that gets added to your support tickets. In the **Backup and Restore** dialog, select **Support Ticket Diagnostics**.
--
-### Alert updates
-
-**In the Azure portal**:
-
-Alerts are now available in Defender for IoT in the Azure portal. Work with alerts to enhance the security and operation of your IoT/OT network.
-
-The new **Alerts** page is currently in Public Preview, and provides:
--- An aggregated, real-time view of threats detected by network sensors.-- Remediation steps for devices and network processes.-- Streaming alerts to Microsoft Sentinel and empower your SOC team.-- Alert storage for 90 days from the time they're first detected.-- Tools to investigate source and destination activity, alert severity and status, MITRE ATT&CK information, and contextual information about the alert.-
-For example:
--
-**On the sensor console**:
-
-On the sensor console, the **Alerts** page now shows details for alerts detected by sensors that are configured with a cloud-connection to Defender for IoT on Azure. Users working with alerts in both Azure and on-premises should understand how alerts are managed between the Azure portal and the on-premises components.
--
-Other alert updates include:
--- **Access contextual data** for each alert, such as events that occurred around the same time, or a map of connected devices. Maps of connected devices are available for sensor console alerts only.--- **Alert statuses** are updated, and, for example, now include a *Closed* status instead of *Acknowledged*.--- **Alert storage** for 90 days from the time that they're first detected.--- The **Backup Activity with Antivirus Signatures Alert**. This new alert warning is triggered for traffic detected between a source device and destination backup server, which is often legitimate backup activity. Critical or major malware alerts are no longer triggered for such activity.--- **During upgrades**, sensor console alerts that are currently archived are deleted. Pinned alerts are no longer supported, so pins are removed for sensor console alerts as relevant.-
-### Custom alert updates
-
-The sensor console's **Custom alert rules** page now provides:
--- Hit count information in the **Custom alert rules** table, with at-a-glance details about the number of alerts triggered in the last week for each rule you've created.--- The ability to schedule custom alert rules to run outside of regular working hours.--- The ability to alert on any field that can be extracted from a protocol using the DPI engine.--- Complete protocol support when creating custom rules, and support for an extensive range of related protocol variables.-
- :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/protocol-support-custom-alerts.png" alt-text="Screenshot of the updated Custom alerts dialog. "lightbox="media/how-to-manage-sensors-on-the-cloud/protocol-support-custom-alerts.png":::
-
-For more information and the updated custom alert procedure, see [Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules).
-
-### CLI command updates
+### 22.2.7
-The Defender for Iot sensor software installation is now containerized. With the now-containerized sensor, you can use the *cyberx_host* user to investigate issues with other containers or the operating system, or to send files via FTP.
+**Release date**: 10/2022
-This *cyberx_host* user is available by default and connects to the host machine. If you need to, recover the password for the *cyberx_host* user from the **Sites and sensors** page in Defender for IoT.
+**Supported until**: 09/2023
-As part of the containerized sensor, the following CLI commands have been modified:
+This version includes bug fixes and stability improvements.
-|Legacy name |Replacement |
-|||
-|`cyberx-xsense-reconfigure-interfaces` |`sudo dpkg-reconfigure iot-sensor` |
-|`cyberx-xsense-reload-interfaces` | `sudo dpkg-reconfigure iot-sensor` |
-|`cyberx-xsense-reconfigure-hostname` | `sudo dpkg-reconfigure iot-sensor` |
-| `cyberx-xsense-system-remount-disks` |`sudo dpkg-reconfigure iot-sensor` |
+### 22.2.6
-The `sudo cyberx-xsense-limit-interface-I eth0 -l value` CLI command was removed. This command was used to limit the interface bandwidth that the sensor uses for day-to-day procedures, and is no longer supported.
+**Release date**: 09/2022
-For more information, see [Defender for IoT installation](how-to-install-software.md) and [Work with Defender for IoT CLI commands](references-work-with-defender-for-iot-cli-commands.md).
+**Supported until**: 04/2023
-### Update to version 22.1.x
+This version includes the following new updates and fixes:
-To use all of Defender for IoT's latest features, make sure to update your sensor software versions to 22.1.x.
+- Bug fixes and stability improvements
+- Enhancements to the device type classification algorithm
-If you're on a legacy version, you may need to run a series of updates in order to get to the latest version. You'll also need to update your firewall rules and reactivate your sensor with a new activation file.
+### 22.2.5
-After you've upgraded to version 22.1.x, the new upgrade log can be found at the following path, accessed via SSH and the *cyberx_host* user: `/opt/sensor/logs/legacy-upgrade.log`.
+**Release date**: 08/2022
-For more information, see [Update OT system software](update-ot-software.md).
+**Supported until**: 04/2023
-> [!NOTE]
-> Upgrading to version 22.1.x is a large update, and you should expect the update process to require more time than previous updates.
->
+This version includes minor stability improvements.
-### New connectivity model and firewall requirements
+### 22.2.4
-Defender for IoT version 22.1.x supports a new set of sensor connection methods that provide simplified deployment, improved security, scalability, and flexible connectivity.
+**Release date**: 07/2022
-In addition to [migration steps](connect-sensors.md#migration-for-existing-customers), this new connectivity model requires that you open a new firewall rule. For more information, see:
+**Supported until**: 04/2023
-- **New firewall requirements**: [Sensor access to Azure portal](how-to-set-up-your-network.md#sensor-access-to-azure-portal).-- **Architecture**: [Sensor connection methods](architecture-connections.md)-- **Connection procedures**: [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md)
+This version includes the following new updates and fixes:
-### Protocol improvements
+- [Device inventory enhancements in the sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md):
-This version of Defender for IoT provides improved support for:
+ - Merge duplicate devices, delete single devices, and delete inactive devices by admin users
+ - **Last seen** value in the device details pane is replaced by **Last activity**
-- Profinet DCP-- Honeywell-- Windows endpoint detection
+- [New parameters for the *devicecves* API](api/management-integration-apis.md): `sensorId`, `score`, and `deviceIds`
-### Modified, replaced, or removed options and configurations
+- [New alert columns with timestamp data](how-to-view-alerts.md): **Last detection**, **First detection**, and **Last activity**
-The following Defender for IoT options and configurations have been moved, removed, and/or replaced:
+### 22.2.3
-- Reports previously found on the **Reports** page are now shown on the **Data Mining** page instead. You can also continue to view data mining information directly from the on-premises management console.
+**Release date**: 07/2022
-- Changing a locally managed sensor name is now supported only by onboarding the sensor to the Azure portal again with the new name. Sensor names can no longer be changed directly from the sensor. For more information, see [Change the name of a sensor](how-to-manage-individual-sensors.md#change-the-name-of-a-sensor).
+**Supported until**: 04/2023
+This version includes the following new updates and fixes:
-## December 2021
+- [New naming convention for hardware profiles](ot-appliance-sizing.md)
+- [PCAP access from the Azure portal](how-to-manage-cloud-alerts.md)
+- [Bi-directional alert synch between sensors and the Azure portal](how-to-manage-cloud-alerts.md#managing-alerts-in-a-hybrid-deployment)
+- [Sensor connections restored after certificate rotation](how-to-deploy-certificates.md)
+- [Upload diagnostic logs for support tickets from the Azure portal](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support-public-preview)
+- [Improved security for uploading protocol plugins](resources-manage-proprietary-protocols.md)
+- [Sensor names shown in browser tabs](how-to-manage-individual-sensors.md)
-**Sensor software version**: 10.5.4
+## Versions 22.1.x
-- [Enhanced integration with Microsoft Sentinel (Preview)](#enhanced-integration-with-microsoft-sentinel-preview)-- [Apache Log4j vulnerability](#apache-log4j-vulnerability)-- [Alerting](#alerting)
+Software versions 22.1.x support direct updates to the latest OT monitoring software versions available. For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
-### Enhanced integration with Microsoft Sentinel (Preview)
+### 22.1.7
-The new **IoT OT Threat Monitoring with Defender for IoT solution** is available and provides enhanced capabilities for Microsoft Defender for IoT integration with Microsoft Sentinel. The **IoT OT Threat Monitoring with Defender for IoT solution** is a set of bundled content, including analytics rules, workbooks, and playbooks, configured specifically for Defender for IoT data. This solution currently supports only Operational Networks (OT/ICS).
+**Release date**: 07/2022
-For information on integrating with Microsoft Sentinel, see [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](../../sentinel/iot-solution.md) and [Tutorial: Investigate and detect threats for IoT devices](../../sentinel/iot-advanced-threat-monitoring.md).
+**Supported until**: 06/2023
-### Apache Log4j vulnerability
+This version includes the following new updates and fixes:
-Version 10.5.4 of Microsoft Defender for IoT mitigates the Apache Log4j vulnerability. For details, see [the security advisory update](https://techcommunity.microsoft.com/t5/microsoft-defender-for-iot/updated-15-dec-defender-for-iot-security-advisory-apache-log4j/m-p/3036844).
+- [Identical passwords for *cyberx_host* and *cyberx* users created during installations and updates](how-to-install-software.md)
-### Alerting
+### 22.1.6
-Version 10.5.4 of Microsoft Defender for IoT delivers important alert enhancements:
+**Release date**: 06/2022
-- Alerts for certain minor events or edge-cases are now disabled.-- For certain scenarios, similar alerts are minimized in a single alert message.
+**Supported until**: 10/2022
-These changes reduce alert volume and enable more efficient targeting and analysis of security and operational events.
+This version includes minor maintenance updates for internal sensor components.
-#### Alerts permanently disabled
+### 22.1.5
-The alerts listed below are permanently disabled with version 10.5.4. Detection and monitoring are still supported for traffic associated with the alerts.
+**Release date**: 06/2022
-**Policy engine alerts**
+**Supported until**: 10/2022
-- RPC Procedure Invocations-- Unauthorized HTTP Server-- Abnormal usage of MAC Addresses
+This version includes minor updates to improve TI installation packages and software updates.
-#### Alerts disabled by default
+### 22.1.4
-The alerts listed below are disabled by default with version 10.5.4. You can re-enable the alerts from the Support page of the sensor console, if necessary.
+**Release date**: 04/2022
-**Anomaly engine alert**
-- Abnormal Number of Parameters in HTTP Header-- Abnormal HTTP Header Length-- Illegal HTTP Header Content
+**Supported until**: 10/2022
-**Operational engine alerts**
-- HTTP Client Error-- RPC Operation Failed
+This version includes the following new updates and fixes:
-**Policy engine alerts**
+- [Extended device property data in the **Device inventory** page on the Azure portal](how-to-manage-device-inventory-for-organizations.md), for the **Description**, **Tags**, **Protocols**, **Scanner**, and **Last Activity** fields
-Disabling these alerts also disables monitoring of related traffic. Specifically, this traffic won't be reported in Data Mining reports.
+### 22.1.3
-- Illegal HTTP Communication alert and HTTP Connections Data Mining traffic-- Unauthorized HTTP User Agent alert and HTTP User Agents Data Mining traffic-- Unauthorized HTTP SOAP Action and HTTP SOAP Actions Data Mining traffic
+**Release date**: 03/2022
-#### Updated alert functionality
+**Supported until**: 10/2022
-**Unauthorized Database Operation alert**
-Previously, this alert covered DDL and DML alerting and Data Mining reporting. Now:
-- DDL traffic: alerting and monitoring are supported.-- DML traffic: Monitoring is supported. Alerting isn't supported.
+This version includes the following new updates and fixes:
-**New Asset Detected alert**
-This alert is disabled for new devices detected in IT subnets. The New Asset Detected alert is still triggered for new devices discovered in OT subnets. OT subnets are detected automatically and can be updated by users if necessary.
+- [Diagnostic logs automatically available to support for cloud-connected sensors](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support)
+- [Rockwell protocol: Device inventory shows PLC operating mode key state, run state, and security mode](how-to-manage-device-inventory-for-organizations.md)
+- [Automatic CLI session timeouts](references-work-with-defender-for-iot-cli-commands.md)
+- [Sensor health widgets in the Azure portal](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health-public-preview)
-### Minimized alerting
+### 22.1.1
-Alert triggering for specific scenarios has been minimized to help reduce alert volume and simplify alert investigation. In these scenarios, if a device performs repeated activity on targets, an alert is triggered once. Previously, a new alert was triggered each time the same activity was carried out.
+**Release date**: 02/2022
-This new functionality is available on the following alerts:
+**Supported until**: 10/2022
-- Port Scan Detected alerts, based on activity of the source device (generated by the Anomaly engine)-- Malware alerts, based on activity of the source device. (generated by the Malware engine). -- Suspicion of Denial of Service Attack alerts, based on activity of the destination device (generated by the Malware engine)
+This version includes the following new updates and fixes:
-## November 2021
+- [New sensor installation wizard](how-to-install-software.md)
-**Sensor software version**: 10.5.3
+- [Sensor redesign and unified Microsoft product experience](how-to-manage-individual-sensors.md)
-The following feature enhancements are available with version 10.5.3 of Microsoft Defender for IoT.
+- [Enhanced sensor Overview page](how-to-manage-individual-sensors.md)
-- The on-premises management console, has a new ServiceNow integration API. For more information, see [Integration API reference for on-premises management consoles (Public preview)](api/management-integration-apis.md).
+- [New sensor diagnostics log](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support)
-- Enhancements have been made to the network traffic analysis of multiple OT and ICS protocol dissectors.
+- [Alert updates](how-to-view-alerts.md):
-- As part of our automated maintenance, archived alerts that are over 90 days old will now be automatically deleted.
+ - Contextual data for each alert
+ - Refreshed alert statuses
+ - Alert storage updates
+ - A new **Backup Activity with Antivirus Signatures** alert
+ - Alert management changes during software updates
-- Many enhancements have been made to the exporting of alert metadata based on customer feedback.
+- [Enhancements for creating custom alerts on the sensor](how-to-accelerate-alert-incident-response.md#customize-alert-rules): Hit count data, advanced scheduling options, and more supported fields and protocols
-## October 2021
+- [Modified CLI commands](references-work-with-defender-for-iot-cli-commands.md): The legacy commands for reconfiguring interfaces, the hostname, and disk mounts are all replaced by the following new command:
-**Sensor software version**: 10.5.2
+ - `sudo dpkg-reconfigure iot-sensor`
-The following feature enhancements are available with version 10.5.2 of Microsoft Defender for IoT.
+- [Refreshed update process and update log](update-ot-software.md)
-- [PLC operating mode detections (Public Preview)](#plc-operating-mode-detections-public-preview)
+- [New connectivity models](architecture-connections.md)
-- [PCAP API](#pcap-api)
+- [New firewall requirements](how-to-set-up-your-network.md#sensor-access-to-azure-portal)
-- [On-premises Management Console Audit](#on-premises-management-console-audit)
+- [Improved support for Profinet DCP, Honeywell, and Windows endpoint detection protocols](concept-supported-protocols.md)
-- [Webhook Extended](#webhook-extended)
+- [Sensor reports now accessible from the **Data Mining** page](how-to-create-data-mining-queries.md)
-- [Unicode support for certificate passphrases](#unicode-support-for-certificate-passphrases)
+- [Updated process for sensor name changes](how-to-manage-individual-sensors.md#change-the-name-of-a-sensor)
-### PLC operating mode detections (Public Preview)
+## Versions 10.5.x
-Users can now view PLC operating mode states, changes, and risks. The PLC Operating mode consists of the PLC logical Run state and the physical Key state, if a physical key switch exists on the PLC.
+To update your software to the latest version available, first update to version 22.1.7, and then update again to the latest 22.2.x version. For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
-This new capability helps improve security by detecting *unsecure* PLCs, and as a result prevents malicious attacks such as PLC Program Downloads. The 2017 Triton attack on a petrochemical plant illustrates the effects of such risks.
-This information also provides operational engineers with critical visibility into the operational mode of enterprise PLCs.
+### 10.5.5
-#### What is an unsecure mode?
+**Release date**: 12/2021
-If the Key state is detected as *Program* or the *Run state* is detected as either *Remote* or *Program*, the PLC is defined by Defender for IoT as *unsecure*.
+**Supported until**: 09/2022
-#### Visibility and risk assessment
+This version includes minor maintenance updates.
-- Use the Device Inventory to view the PLC state of organizational PLCs, and contextual device information. Use the Device Inventory Settings dialog box to add this column to the Inventory.
+### 10.5.4
- :::image type="content" source="media/release-notes/device-inventory-plc.png" alt-text="Device inventory showing PLC operating mode.":::
+**Release date**: 12/2021
-- View PLC secure status and last change information per PLC in the Attributes section of the Device Properties screen. If the *Key state* is detected as *Program* or the *Run state* is detected as either *Remote* or *Program*, the PLC is defined by Defender for IoT as *unsecure*. The Device Properties PLC Secured option will read false.
+**Supported until**: 09/2022
- :::image type="content" source="media/release-notes/attributes-plc.png" alt-text="Attributes screen showing PLC information.":::
+This version includes the following new updates and fixes:
-- View all network PLC Run and Key State statuses by creating a Data Mining with PLC operating mode information.
+- [New Microsoft Sentinel solution for Defender for IoT](../../sentinel/iot-solution.md)
+- [Mitigation for the Apache Log4j vulnerability](https://techcommunity.microsoft.com/t5/microsoft-defender-for-iot/updated-15-dec-defender-for-iot-security-advisory-apache-log4j/m-p/3036844)
+- [Alerts for minor events and edge cases disabled or minimized](alert-engine-messages.md)
- :::image type="content" source="media/release-notes/data-mining-plc.png" alt-text="Data inventory screen showing PLC option.":::
+### 10.5.3
-- Use the Risk Assessment Report to review the number of network PLCs in the unsecure mode, and additional information you can use to mitigate unsecure PLC risks.
+**Release date**: 10/2021
-### PCAP API
+**Supported until**: 07/2022
-The new PCAP API lets the user retrieve PCAP files from the sensor via the on-premises management console with, or without direct access to the sensor itself.
+This version includes the following new updates and fixes:
-### On-premises Management Console audit
+- [New integration APIs](api/management-integration-apis.md)
+- [Network traffic analysis enhancements for multiple OT and ICS protocols](concept-supported-protocols.md)
+- [Automatic deletion for older, archived alerts](how-to-view-alerts.md)
+- [Export alert enhancements](how-to-work-with-alerts-on-premises-management-console.md#export-alert-information)
-Audit logs for the on-premises management console can now be exported to facilitate investigations into what changes were made, and by who.
+### 10.5.2
-### Webhook extended
+**Release date**: 10/2021
-Webhook extended can be used to send extra data to the endpoint. The extended feature includes all of the information in the Webhook alert and adds the following information to the report:
+**Supported until**: 07/2022
-- sensorID-- sensorName-- zoneID-- zoneName-- siteID-- siteName-- sourceDeviceAddress-- destinationDeviceAddress-- remediationSteps-- handled-- additionalInformation
+This version includes the following new updates and fixes:
-### Unicode support for certificate passphrases
+- [PLC operating mode detections](how-to-create-risk-assessment-reports.md)
+- [New PCAP API](api/management-alert-apis.md#pcap-request-alert-pcap)
+- [On-premises management console audit](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#export-audit-logs-for-troubleshooting)
+- [Support for Webhook extended to send data to endpoints](how-to-forward-alert-information-to-partners.md#webhook-extended)
+- [Unicode support for certificate passphrases](how-to-deploy-certificates.md)
-Unicode characters are now supported when working with sensor certificate passphrases. For more information, see [Certificates for appliance encryption and authentication (OT appliances)](how-to-deploy-certificates.md#certificates-for-appliance-encryption-and-authentication-ot-appliances).
## Next steps
-[Getting started with Defender for IoT](getting-started.md)
+For more information about the features listed in this article, see [What's new in Microsoft Defender for IoT?](whats-new.md) and [What's new archive for Microsoft Defender for IoT for organizations](release-notes-archive.md).
defender-for-iot Update Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md
This article describes how to update Defender for IoT software versions on OT se
You can purchase preconfigured appliances for your sensors and on-premises management consoles, or install software on your own hardware machines. In either case, you'll need to update software versions to use new features for OT sensors and on-premises management consoles.
-For more information, see [Which appliances do I need?](ot-appliance-sizing.md), [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md), and [What's new in Microsoft Defender for IoT?](release-notes.md).
+For more information, see [Which appliances do I need?](ot-appliance-sizing.md), [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md), and [OT monitoring software release notes](release-notes.md).
## Legacy version updates vs. recent version updates
In such cases, make sure to update your on-premises management consoles *before*
You can update software on your sensors individually, directly from each sensor console, or in bulk from the on-premises management console. Select one of the following tabs for the steps required in each method. > [!NOTE]
-> If you are updating from software versions earlier than [22.1.x](release-notes.md#update-to-version-221x), note that this version has a large update with more complicated background processes. Expect this update to take more time than earlier updates have required.
+> If you are updating from software versions earlier than [22.1.x](whats-new.md#update-to-version-221x), note that this version has a large update with more complicated background processes. Expect this update to take more time than earlier updates have required.
> > [!IMPORTANT]
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
+
+ Title: What's new in Microsoft Defender for IoT
+description: This article describes features available in Microsoft Defender for IoT, across both OT and Enterprise IoT networks, and both on-premises and in the Azure portal.
+ Last updated : 09/15/2022++
+# What's new in Microsoft Defender for IoT?
+
+This article describes features available in Microsoft Defender for IoT, across both OT and Enterprise IoT networks, both on-premises and in the Azure portal, and for versions released in the last nine months.
+
+Features released earlier than nine months ago are described in the [What's new archive for Microsoft Defender for IoT for organizations](release-notes-archive.md). For more information specific to OT monitoring software versions, see [OT monitoring software release notes](release-notes.md).
+
+> [!NOTE]
+> Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## November 2022
+
+|Service area |Updates |
+|||
+|**OT networks** | [New OT monitoring software release notes](#new-ot-monitoring-software-release-notes) |
+
+### New OT monitoring software release notes
+
+Defender for IoT documentation now has a new [release notes](release-notes.md) page dedicated to OT monitoring software, with details about our version support models and update recommendations.
++
+We continue to update this article, our main **What's new** page, with new features and enhancements for both OT and Enterprise IoT networks. New items listed include both on-premises and cloud features, and are listed by month.
+
+In contrast, the new [OT monitoring software release notes](release-notes.md) lists only OT network monitoring updates that require you to update your on-premises software. Items are listed by major and patch versions, with an aggregated table of versions, dates, and scope.
+
+For more information, see [OT monitoring software release notes](release-notes.md).
+
+## October 2022
+
+|Service area |Updates |
+|||
+|**OT networks** | [Enhanced OT monitoring alert reference](#enhanced-ot-monitoring-alert-reference) |
+
+### Enhanced OT monitoring alert reference
+
+Our alert reference article now includes the following details for each alert:
+
+- **Alert category**, helpful when you want to investigate alerts that are aggregated by a specific activity or configure SIEM rules to generate incidents based on specific activities
+
+- **Alert threshold**, for relevant alerts. Thresholds indicate the specific point at which an alert is triggered. The *cyberx* user can modify alert thresholds as needed from the sensor's **Support** page.
+
+For more information, see [OT monitoring alert types and descriptions](alert-engine-messages.md), specifically [Supported alert categories](alert-engine-messages.md#supported-alert-categories).
+
+## September 2022
+
+|Service area |Updates |
+|||
+|**OT networks** |**All supported OT sensor software versions**: <br>- [Device vulnerabilities from the Azure portal](#device-vulnerabilities-from-the-azure-portal-public-preview)<br>- [Security recommendations for OT networks](#security-recommendations-for-ot-networks-public-preview)<br><br> **All OT sensor software versions 22.x**: [Updates for Azure cloud connection firewall rules](#updates-for-azure-cloud-connection-firewall-rules-public-preview) <br><br>**Sensor software version 22.2.7**: <br> - Bug fixes and stability improvements <br><br> **Sensor software version 22.2.6**: <br> - Bug fixes and stability improvements <br>- Enhancements to the device type classification algorithm<br><br>**Microsoft Sentinel integration**: <br>- [Investigation enhancements with IoT device entities](#investigation-enhancements-with-iot-device-entities-in-microsoft-sentinel)<br>- [Updates to the Microsoft Defender for IoT solution](#updates-to-the-microsoft-defender-for-iot-solution-in-microsoft-sentinels-content-hub)|
+
+### Security recommendations for OT networks (Public preview)
+
+Defender for IoT now provides security recommendations to help customers manage their OT/IoT network security posture. Defender for IoT recommendations help users form actionable, prioritized mitigation plans that address the unique challenges of OT/IoT networks. Use recommendations to lower your network's risk and attack surface.
+
+You can see the following security recommendations from the Azure portal for detected devices across your networks:
+
+- **Review PLC operating mode**. Devices with this recommendation are found with PLCs set to unsecure operating mode states. If access to the PLC is no longer required, we recommend setting the PLC operating mode to the **Secure Run** state to reduce the threat of malicious PLC programming.
+
+- **Review unauthorized devices**. Devices with this recommendation must be identified and authorized as part of the network baseline. We recommend taking action to identify any indicated devices. Disconnect any devices from your network that remain unknown even after investigation to reduce the threat of rogue or potentially malicious devices.
+
+Access security recommendations from one of the following locations:
+
+- The **Recommendations** page, which displays all current recommendations across all detected OT devices.
+
+- The **Recommendations** tab on a device details page, which displays all current recommendations for the selected device.
+
+From either location, select a recommendation to drill down further and view lists of all detected OT devices that are currently in a *healthy* or *unhealthy* state, according to the selected recommendation. From the **Unhealthy devices** or **Healthy devices** tab, select a device link to jump to the selected device details page. For example:
++
+For more information, see [View the device inventory](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory).
+
+### Device vulnerabilities from the Azure portal (Public preview)
+
+Defender for IoT now provides vulnerability data in the Azure portal for detected OT network devices. Vulnerability data is based on the repository of standards-based vulnerability data documented at the [US government National Vulnerability Database (NVD)](https://www.nist.gov/programs-projects/national-vulnerability-database-nvd).
+
+Access vulnerability data in the Azure portal from the following locations:
+
+- On a device details page, select the **Vulnerabilities** tab to view current vulnerabilities on the selected device. For example, from the **Device inventory** page, select a specific device and then select **Vulnerabilities**.
+
+ For more information, see [View the device inventory](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory).
+
+- A new **Vulnerabilities** workbook displays vulnerability data across all monitored OT devices. Use the **Vulnerabilities** workbook to view data like CVE by severity or vendor, and full lists of detected vulnerabilities and vulnerable devices and components.
+
+ Select an item in the **Device vulnerabilities**, **Vulnerable devices**, or **Vulnerable components** tables to view related information in the tables on the right.
+
+ For example:
+
+ :::image type="content" source="media/release-notes/vulnerabilities-workbook.png" alt-text="Screenshot of a Vulnerabilities workbook in Defender for IoT.":::
+
+ For more information, see [Use Azure Monitor workbooks in Microsoft Defender for IoT](workbooks.md).
+
+### Updates for Azure cloud connection firewall rules (Public preview)
+
+OT network sensors connect to Azure to provide alert and device data and sensor health messages, access threat intelligence packages, and more. Connected Azure services include IoT Hub, Blob Storage, Event Hubs, and the Microsoft Download Center.
+
+For OT sensors with software versions 22.x and higher, Defender for IoT now supports increased security when adding outbound allow rules for connections to Azure. Now you can define your outbound allow rules to connect to Azure without using wildcards.
+
+When defining outbound allow rules to connect to Azure, you'll need to enable HTTPS traffic to each of the required endpoints on port 443. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.
+
+For supported sensor versions, download the full list of required secure endpoints from the following locations in the Azure portal:
+
+- **A successful sensor registration page**: After onboarding a new OT sensor, version 22.x, the successful registration page now provides instructions for next steps, including a link to the endpoints you'll need to add as secure outbound allow rules on your network. Select the **Download endpoint details** link to download the JSON file.
+
+ For example:
+
+ :::image type="content" source="media/release-notes/download-endpoints.png" alt-text="Screenshot of a successful OT sensor registration page with the download endpoints link.":::
+
+- **The Sites and sensors page**: Select an OT sensor with software versions 22.x or higher, or a site with one or more supported sensor versions. Then, select **More actions** > **Download endpoint details** to download the JSON file. For example:
+
+ :::image type="content" source="media/release-notes/download-endpoints-sites-sensors.png" alt-text="Screenshot of the Sites and sensors page with the download endpoint details link.":::
+
+For more information, see:
+
+- [Tutorial: Get started with Microsoft Defender for IoT for OT security](tutorial-onboarding.md)
+- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)
+- [Networking requirements](how-to-set-up-your-network.md#sensor-access-to-azure-portal)
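+
+If you script your firewall configuration, a minimal sketch like the following could turn the downloaded endpoint details file into a list of outbound allow rules. The file name and the JSON field names used here (`endpoint` and `port`) are assumptions for illustration only; inspect the file you download for its actual structure.
+
+```python
+# Hypothetical sketch: print outbound HTTPS allow rules from the downloaded
+# endpoint details JSON file. Field names ("endpoint", "port") are assumed;
+# check the downloaded file for its actual schema.
+import json
+
+with open("endpoint-details.json") as f:  # file downloaded from the Azure portal
+    endpoints = json.load(f)
+
+for entry in endpoints:
+    host = entry.get("endpoint")
+    port = entry.get("port", 443)         # HTTPS traffic is required on port 443
+    print(f"allow outbound tcp from <sensor-ip> to {host} port {port}")
+```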
+
+### Investigation enhancements with IoT device entities in Microsoft Sentinel
+
+Defender for IoT's integration with Microsoft Sentinel now supports an IoT device entity page. When investigating incidents and monitoring IoT security in Microsoft Sentinel, you can now identify your most sensitive devices and jump directly to more details on each device entity page.
+
+The IoT device entity page provides contextual device information about an IoT device, with basic device details and device owner contact information. Device owners are defined by site in the **Sites and sensors** page in Defender for IoT.
+
+The IoT device entity page can help prioritize remediation based on device importance and business impact, as per each alert's site, zone, and sensor. For example:
++
+You can also now hunt for vulnerable devices on the Microsoft Sentinel **Entity behavior** page. For example, view the top five IoT devices with the highest number of alerts, or search for a device by IP address or device name:
++
+For more information, see [Investigate further with IoT device entities](../../sentinel/iot-advanced-threat-monitoring.md#investigate-further-with-iot-device-entities) and [Site management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#site-management-options-from-the-azure-portal).
+
+### Updates to the Microsoft Defender for IoT solution in Microsoft Sentinel's content hub
+
+This month, we've released version 2.0 of the **Microsoft Defender for IoT** solution in Microsoft Sentinel's content hub, previously known as the **IoT/OT Threat Monitoring with Defender for IoT** solution.
+
+Updates in this version of the solution include:
+
+- **A name change**. If you'd previously installed the **IoT/OT Threat Monitoring with Defender for IoT** solution in your Microsoft Sentinel workspace, the solution is automatically renamed to **Microsoft Defender for IoT**, even if you don't update the solution.
+
+- **Workbook improvements**: The **Defender for IoT** workbook now includes:
+
+ - A new **Overview** dashboard with key metrics on the device inventory, threat detection, and security posture. For example:
+
+ :::image type="content" source="media/release-notes/sentinel-workbook-overview.png" alt-text="Screenshot of the new Overview tab in the IoT OT Threat Monitoring with Defender for IoT workbook." lightbox="media/release-notes/sentinel-workbook-overview.png":::
+
+ - A new **Vulnerabilities** dashboard with details about CVEs shown in your network and their related vulnerable devices. For example:
+
+ :::image type="content" source="media/release-notes/sentinel-workbook-vulnerabilities.png" alt-text="Screenshot of the new Vulnerability tab in the IoT OT Threat Monitoring with Defender for IoT workbook." lightbox="media/release-notes/sentinel-workbook-vulnerabilities.png":::
+
+ - Improvements on the **Device inventory** dashboard, including access to device recommendations, vulnerabilities, and direct links to the Defender for IoT device details pages. The **Device inventory** dashboard in the **IoT/OT Threat Monitoring with Defender for IoT** workbook is fully aligned with the Defender for IoT [device inventory data](how-to-manage-device-inventory-for-organizations.md).
+
+- **Playbook updates**: The **Microsoft Defender for IoT** solution now supports the following SOC automation functionality with new playbooks:
+
+ - **Automation with CVE details**: Use the *AD4IoT-CVEAutoWorkflow* playbook to enrich incident comments with CVEs of related devices based on Defender for IoT data. The incidents are triaged, and if the CVE is critical, the asset owner is notified about the incident by email.
+
+ - **Automation for email notifications to device owners**. Use the *AD4IoT-SendEmailtoIoTOwner* playbook to have a notification email automatically sent to a device's owner about new incidents. Device owners can then reply to the email to update the incident as needed. Device owners are defined at the site level in Defender for IoT.
+
+ - **Automation for incidents with sensitive devices**: Use the *AD4IoT-AutoTriageIncident* playbook to automatically update an incident's severity based on the devices involved in the incident, and their sensitivity level or importance to your organization. For example, any incident involving a sensitive device can be automatically escalated to a higher severity level.
+
+For more information, see [Investigate Microsoft Defender for IoT incidents with Microsoft Sentinel](../../sentinel/iot-advanced-threat-monitoring.md?bc=%2fazure%2fdefender-for-iot%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fdefender-for-iot%2forganizations%2ftoc.json).
+
+## August 2022
+
+|Service area |Updates |
+|||
+|**OT networks** |**Sensor software version 22.2.5**: Minor version with stability improvements<br><br>**Sensor software version 22.2.4**: [New alert columns with timestamp data](#new-alert-columns-with-timestamp-data)<br><br>**Sensor software version 22.1.3**: [Sensor health from the Azure portal (Public preview)](#sensor-health-from-the-azure-portal-public-preview) |
+
+### New alert columns with timestamp data
+
+Starting with OT sensor version 22.2.4, Defender for IoT alerts in the Azure portal and the sensor console now show the following columns and data:
+
+- **Last detection**. Defines the last time the alert was detected in the network, and replaces the **Detection time** column.
+
+- **First detection**. Defines the first time the alert was detected in the network.
+
+- **Last activity**. Defines the last time the alert was changed, including manual updates for severity or status, or automated changes for device updates or device/alert de-duplication.
+
+The **First detection** and **Last activity** columns aren't displayed by default. Add them to your **Alerts** page as needed.
+
+> [!TIP]
+> If you're also a Microsoft Sentinel user, you'll be familiar with similar data from your Log Analytics queries. The new alert columns in Defender for IoT are mapped as follows:
+>
+> - The Defender for IoT **Last detection** time is similar to the Log Analytics **EndTime**
+> - The Defender for IoT **First detection** time is similar to the Log Analytics **StartTime**
+> - The Defender for IoT **Last activity** time is similar to the Log Analytics **TimeGenerated**
+
+For more information, see:
+
+- [View alerts on the Defender for IoT portal](how-to-manage-cloud-alerts.md)
+- [View alerts on your sensor](how-to-view-alerts.md)
+- [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md)
+
+### Sensor health from the Azure portal (Public preview)
+
+For OT sensor versions 22.1.3 and higher, you can use the new sensor health widgets and table column data to monitor sensor health directly from the **Sites and sensors** page on the Azure portal.
++
+We've also added a sensor details page, where you drill down to a specific sensor from the Azure portal. On the **Sites and sensors** page, select a specific sensor name. The sensor details page lists basic sensor data, sensor health, and any sensor settings applied.
+
+For more information, see [Understand sensor health (Public preview)](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health-public-preview) and [Sensor health message reference](sensor-health-messages.md).
+
+## July 2022
+
+|Service area |Updates |
+|||
+|**Enterprise IoT networks** | - [Enterprise IoT and Defender for Endpoint integration in GA](#enterprise-iot-and-defender-for-endpoint-integration-in-ga) |
+|**OT networks** |**Sensor software version 22.2.4**: <br>- [Device inventory enhancements](#device-inventory-enhancements)<br>- [Enhancements for the ServiceNow integration API](#enhancements-for-the-servicenow-integration-api)<br><br>**Sensor software version 22.2.3**:<br>- [OT appliance hardware profile updates](#ot-appliance-hardware-profile-updates)<br>- [PCAP access from the Azure portal](#pcap-access-from-the-azure-portal-public-preview)<br>- [Bi-directional alert synch between sensors and the Azure portal](#bi-directional-alert-synch-between-sensors-and-the-azure-portal-public-preview)<br>- [Sensor connections restored after certificate rotation](#sensor-connections-restored-after-certificate-rotation)<br>- [Support diagnostic log enhancements](#support-diagnostic-log-enhancements-public-preview)<br>- [Improved security for uploading protocol plugins](#improved-security-for-uploading-protocol-plugins)<br>- [Sensor names shown in browser tabs](#sensor-names-shown-in-browser-tabs)<br><br>**Sensor software version 22.1.7**: <br>- [Same passwords for *cyberx_host* and *cyberx* users](#same-passwords-for-cyberx_host-and-cyberx-users) |
+|**Cloud-only features** | - [Microsoft Sentinel incident synch with Defender for IoT alerts](#microsoft-sentinel-incident-synch-with-defender-for-iot-alerts) |
+
+### Enterprise IoT and Defender for Endpoint integration in GA
+
+The Enterprise IoT integration with Microsoft Defender for Endpoint is now in General Availability (GA). With this update, we've made the following updates and improvements:
+
+- Onboard an Enterprise IoT plan directly in Defender for Endpoint. For more information, see [Manage your subscriptions](how-to-manage-subscriptions.md) and the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
+
+- Seamless integration with Microsoft Defender for Endpoint to view detected Enterprise IoT devices, and their related alerts, vulnerabilities, and recommendations in the Microsoft 365 Security portal. For more information, see the [Enterprise IoT tutorial](tutorial-getting-started-eiot-sensor.md) and the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration). You can continue to view detected Enterprise IoT devices on the Defender for IoT **Device inventory** page in the Azure portal.
+
+- All Enterprise IoT sensors are now automatically added to the same site in Defender for IoT, named **Enterprise network**. When onboarding a new Enterprise IoT sensor, you only need to define a sensor name and select your subscription, without defining a site or zone.
+
+> [!NOTE]
+> The Enterprise IoT network sensor and all detections remain in Public Preview.
+
+### Same passwords for cyberx_host and cyberx users
+
+During OT monitoring software installations and updates, the **cyberx** user is assigned a random password. When updating from version 10.x.x to version 22.1.7, the **cyberx_host** user is assigned the same password as the **cyberx** user.
+
+For more information, see [Install OT agentless monitoring software](how-to-install-software.md) and [Update Defender for IoT OT monitoring software](update-ot-software.md).
+
+### Device inventory enhancements
+
+Starting in OT sensor version 22.2.4, you can take the following actions from the sensor console's **Device inventory** page:
+
+- **Merge duplicate devices**. You may need to merge devices if the sensor has discovered separate network entities that are associated with a single, unique device. Examples of this scenario might include a PLC with four network cards, a laptop with both WiFi and a physical network card, or a single workstation with multiple network cards.
+
+- **Delete single devices**. Now, you can delete a single device that hasn't communicated for at least 10 minutes.
+
+- **Delete inactive devices by admin users**. Now, all admin users, in addition to the **cyberx** user, can delete inactive devices.
+
+Also starting in version 22.2.4, in the sensor console's **Device inventory** page, the **Last seen** value in the device details pane is replaced by **Last activity**. For example:
++
+For more information, see [Manage your OT device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md).
+
+### Enhancements for the ServiceNow integration API
+
+OT sensor version 22.2.4 provides enhancements for the `devicecves` API, which gets details about the CVEs found for a given device.
+
+Now you can add any of the following parameters to your query to fine tune your results:
+
+- **sensorId** - Shows results from a specific sensor, as defined by the given sensor ID.
+- **score** - Determines a minimum CVE score to be retrieved. All results will have a CVE score equal to or higher than the given value. Default = **0**.
+- **deviceIds** - A comma-separated list of device IDs from which you want to show results. For example: **1232,34,2,456**
+
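+For example, a combined query with these parameters might look like the following. This is a hypothetical sketch only: the management console host name, the authorization header, and the exact `devicecves` route are placeholders, so take the real endpoint and authentication details from the integration API reference linked below.
+
+```bash
+# Hypothetical request - replace <management-console>, <access-token>, and <devicecves-route>
+# with the values documented in the integration API reference.
+curl -k -H "Authorization: <access-token>" \
+  "https://<management-console>/<devicecves-route>?sensorId=1&score=7&deviceIds=1232,34,2,456"
+```
+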
+For more information, see [Integration API reference for on-premises management consoles (Public preview)](api/management-integration-apis.md).
+
+### OT appliance hardware profile updates
+
+We've refreshed the naming conventions for our OT appliance hardware profiles for greater transparency and clarity.
+
+The new names reflect both the *type* of profile, including *Corporate*, *Enterprise*, and *Production line*, and also the related disk storage size.
+
+Use the following table to understand the mapping between legacy hardware profile names and the current names used in the updated software installation:
+
+|Legacy name |New name | Description |
+||||
+|**Corporate** | **C5600** | A *Corporate* environment, with: <br>16 Cores<br>32 GB RAM<br>5.6 TB disk storage |
+|**Enterprise** | **E1800** | An *Enterprise* environment, with: <br>8 Cores<br>32 GB RAM<br>1.8 TB disk storage |
+|**SMB** | **L500** | A *Production line* environment, with: <br>4 Cores<br>8 GB RAM<br>500 GB disk storage |
+|**Office** | **L100** | A *Production line* environment, with: <br>4 Cores<br>8 GB RAM<br>100 GB disk storage |
+|**Rugged** | **L64** | A *Production line* environment, with: <br>4 Cores<br>8 GB RAM<br>64 GB disk storage |
+
+We also now support new enterprise hardware profiles, for sensors supporting both 500 GB and 1 TB disk sizes.
+
+For more information, see [Which appliances do I need?](ot-appliance-sizing.md)
+
+### PCAP access from the Azure portal (Public preview)
+
+Now you can access the raw traffic files, known as packet capture files or PCAP files, directly from the Azure portal. This feature supports SOC or OT security engineers who want to investigate alerts from Defender for IoT or Microsoft Sentinel, without having to access each sensor separately.
++
+PCAP files are downloaded to your Azure storage.
+
+For more information, see [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md).
+
+### Bi-directional alert synch between sensors and the Azure portal (Public preview)
+
+For sensors updated to version 22.2.1, alert statuses and learn statuses are now fully synchronized between the sensor console and the Azure portal. For example, this means that you can close an alert on the Azure portal or the sensor console, and the alert status is updated in both locations.
+
+*Learn* an alert from either the Azure portal or the sensor console to ensure that it's not triggered again the next time the same network traffic is detected.
+
+The sensor console is also synchronized with an on-premises management console, so that alert statuses and learn statuses remain up-to-date across your management interfaces.
+
+For more information, see:
+
+- [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md)
+- [View alerts on your sensor](how-to-view-alerts.md)
+- [Manage alerts from the sensor console](how-to-manage-the-alert-event.md)
+- [Work with alerts on the on-premises management console](how-to-work-with-alerts-on-premises-management-console.md)
+
+### Sensor connections restored after certificate rotation
+
+Starting in version 22.2.3, after rotating your certificates, your sensor connections are automatically restored to your central manager, and you don't need to reconnect them manually.
+
+For more information, see [About certificates](how-to-deploy-certificates.md).
+
+### Support diagnostic log enhancements (Public preview)
+
+Starting in sensor version [22.1.1](#new-support-diagnostics-log), you've been able to download a diagnostic log from the sensor console to send to support when you open a ticket.
+
+Now, for locally managed sensors, you can upload that diagnostic log directly on the Azure portal.
++
+> [!TIP]
+> For cloud-connected sensors, starting from sensor version [22.1.3](#march-2022), the diagnostic log is automatically available to support when you open the ticket.
+>
+For more information, see:
+
+- [Download a diagnostics log for support](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support)
+- [Upload a diagnostics log for support](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support-public-preview)
+
+### Improved security for uploading protocol plugins
+
+This version of the sensor provides improved security for uploading proprietary plugins that you've created using the Horizon SDK.
++
+For more information, see [Manage proprietary protocols with Horizon plugins](resources-manage-proprietary-protocols.md).
+
+### Sensor names shown in browser tabs
+
+Starting in sensor version 22.2.3, your sensor's name is displayed in the browser tab, making it easier for you to identify the sensors you're working with.
+
+For example:
++
+For more information, see [Manage individual sensors](how-to-manage-individual-sensors.md).
+
+### Microsoft Sentinel incident synch with Defender for IoT alerts
+
+The **IoT OT Threat Monitoring with Defender for IoT** solution now ensures that alerts in Defender for IoT are updated with any related incident **Status** changes from Microsoft Sentinel.
+
+This synchronization overrides any status defined in Defender for IoT, in the Azure portal or the sensor console, so that the alert statuses match that of the related incident.
+
+Update your **IoT OT Threat Monitoring with Defender for IoT** solution to use the latest synchronization support, including the new **AD4IoT-AutoAlertStatusSync** playbook. After updating the solution, make sure that you also take the [required steps](../../sentinel/iot-advanced-threat-monitoring.md?#update-alert-statuses-in-defender-for-iot) to ensure that the new playbook works as expected.
+
+For more information, see:
+
+- [Tutorial: Integrate Defender for IoT and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended)
+- [View and manage alerts on the Defender for IoT portal (Preview)](how-to-manage-cloud-alerts.md)
+- [View alerts on your sensor](how-to-view-alerts.md)
+
+## June 2022
+
+- **Sensor software version 22.1.6**: Minor version with maintenance updates for internal sensor components
+
+- **Sensor software version 22.1.5**: Minor version to improve TI installation packages and software updates
+
+We've also recently optimized and enhanced our documentation as follows:
+
+- [Updated appliance catalog for OT environments](#updated-appliance-catalog-for-ot-environments)
+- [Documentation reorganization for end-user organizations](#documentation-reorganization-for-end-user-organizations)
++
+### Updated appliance catalog for OT environments
+
+We've refreshed and revamped the catalog of supported appliances for monitoring OT environments. These appliances support flexible deployment options for environments of all sizes and can be used to host both the OT monitoring sensor and on-premises management consoles.
+
+Use the new pages as follows:
+
+1. **Understand which hardware model best fits your organization's needs.** For more information, see [Which appliances do I need?](ot-appliance-sizing.md)
+
+1. **Learn about the preconfigured hardware appliances that are available to purchase, or system requirements for virtual machines.** For more information, see [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md) and [OT monitoring with virtual appliances](ot-virtual-appliances.md).
+
+ For more information about each appliance type, use the linked reference page, or browse through our new **Reference > OT monitoring appliances** section.
+
+ :::image type="content" source="media/release-notes/appliance-catalog.png" alt-text="Screenshot of the new appliance catalog reference section." lightbox="media/release-notes/appliance-catalog.png":::
+
+ Reference articles for each appliance type, including virtual appliances, include specific steps to configure the appliance for OT monitoring with Defender for IoT. Generic software installation and troubleshooting procedures are still documented in [Defender for IoT software installation](how-to-install-software.md).
+
+### Documentation reorganization for end-user organizations
+
+We recently reorganized our Defender for IoT documentation for end-user organizations, highlighting a clearer path for onboarding and getting started.
+
+Check out our new structure to follow through viewing devices and assets, managing alerts, vulnerabilities and threats, integrating with other services, and deploying and maintaining your Defender for IoT system.
+
+**New and updated articles include**:
+
+- [Welcome to Microsoft Defender for IoT for organizations](overview.md)
+- [Microsoft Defender for IoT architecture](architecture.md)
+- [Quickstart: Get started with Defender for IoT](getting-started.md)
+- [Tutorial: Microsoft Defender for IoT trial setup](tutorial-onboarding.md)
+- [Tutorial: Get started with Enterprise IoT](tutorial-getting-started-eiot-sensor.md)
+- [Plan your sensor connections for OT monitoring](best-practices/plan-network-monitoring.md)
+- [About Microsoft Defender for IoT network setup](how-to-set-up-your-network.md)
+
+> [!NOTE]
+> To send feedback on docs via GitHub, scroll to the bottom of the page and select the **Feedback** option for **This page**. We'd be glad to hear from you!
+>
++
+## April 2022
+
+- [Extended device property data in the Device inventory](#extended-device-property-data-in-the-device-inventory)
+
+### Extended device property data in the Device inventory
+
+**Sensor software version**: 22.1.4
+
+Starting with sensors updated to version 22.1.4, the **Device inventory** page on the Azure portal shows extended data for the following fields:
+
+- **Description**
+- **Tags**
+- **Protocols**
+- **Scanner**
+- **Last Activity**
+
+For more information, see [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md).
+
+## March 2022
+
+**Sensor version**: 22.1.3
+
+- [Use Azure Monitor workbooks with Microsoft Defender for IoT](#use-azure-monitor-workbooks-with-microsoft-defender-for-iot-public-preview)
+- [IoT OT Threat Monitoring with Defender for IoT solution GA](#iot-ot-threat-monitoring-with-defender-for-iot-solution-ga)
+- [Edit and delete devices from the Azure portal](#edit-and-delete-devices-from-the-azure-portal-public-preview)
+- [Key state alert updates](#key-state-alert-updates-public-preview)
+- [Sign out of a CLI session](#sign-out-of-a-cli-session)
++
+### Use Azure Monitor workbooks with Microsoft Defender for IoT (Public preview)
+
+[Azure Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md) provide graphs and dashboards that visually reflect your data, and are now available directly in Microsoft Defender for IoT with data from [Azure Resource Graph](../../governance/resource-graph/index.yml).
+
+In the Azure portal, use the new Defender for IoT **Workbooks** page to view workbooks created by Microsoft and provided out-of-the-box, or create custom workbooks of your own.
++
+For more information, see [Use Azure Monitor workbooks in Microsoft Defender for IoT](workbooks.md).
+
+### IoT OT Threat Monitoring with Defender for IoT solution GA
+
+The IoT OT Threat Monitoring with Defender for IoT solution in Microsoft Sentinel is now GA. In the Azure portal, use this solution to help secure your entire OT environment, whether you need to protect existing OT devices or build security into new OT innovations.
+
+For more information, see [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md) and [Tutorial: Integrate Defender for IoT and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended).
+
+### Edit and delete devices from the Azure portal (Public preview)
+
+The **Device inventory** page in the Azure portal now supports the ability to edit device details, such as security, classification, location, and more:
++
+For more information, see [Edit device details](how-to-manage-device-inventory-for-organizations.md#edit-device-details).
+
+You can only delete devices from Defender for IoT if they've been inactive for more than 14 days. For more information, see [Delete a device](how-to-manage-device-inventory-for-organizations.md#delete-a-device).
+
+### Key state alert updates (Public preview)
+
+Defender for IoT now supports the Rockwell protocol for PLC operating mode detections.
+
+For the Rockwell protocol, the **Device inventory** pages in both the Azure portal and the sensor console now indicate the PLC operating mode key and run state, and whether the device is currently in a secure mode.
+
+If the device's PLC operating mode is ever switched to an unsecured mode, such as *Program* or *Remote*, a **PLC Operating Mode Changed** alert is generated.
+
+For more information, see [Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md).
+
+### Sign out of a CLI session
+
+Starting in this version, CLI users are automatically signed out of their session after 300 seconds of inactivity. To sign out manually, use the new `logout` CLI command.
+
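+For example, a minimal sketch of ending a session manually from the sensor CLI:
+
+```bash
+# End the current Defender for IoT CLI session immediately instead of waiting for the timeout
+logout
+```
+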
+For more information, see [Work with Defender for IoT CLI commands](references-work-with-defender-for-iot-cli-commands.md).
++
+## February 2022
+
+**Sensor software version**: 22.1.1
+
+- [New sensor installation wizard](#new-sensor-installation-wizard)
+- [Sensor redesign and unified Microsoft product experience](#sensor-redesign-and-unified-microsoft-product-experience)
+- [Enhanced sensor Overview page](#enhanced-sensor-overview-page)
+- [New support diagnostics log](#new-support-diagnostics-log)
+- [Alert updates](#alert-updates)
+- [Custom alert updates](#custom-alert-updates)
+- [CLI command updates](#cli-command-updates)
+- [Update to version 22.1.x](#update-to-version-221x)
+- [New connectivity model and firewall requirements](#new-connectivity-model-and-firewall-requirements)
+- [Protocol improvements](#protocol-improvements)
+- [Modified, replaced, or removed options and configurations](#modified-replaced-or-removed-options-and-configurations)
+
+### New sensor installation wizard
+
+Previously, you needed to use separate dialogs to upload a sensor activation file, verify your sensor network configuration, and configure your SSL/TLS certificates.
+
+Now, when installing a new sensor or a new sensor version, our installation wizard provides a streamlined interface to do all these tasks from a single location.
+
+For more information, see [Defender for IoT installation](how-to-install-software.md).
+
+### Sensor redesign and unified Microsoft product experience
+
+The Defender for IoT sensor console has been redesigned to create a unified Microsoft Azure experience and enhance and simplify workflows.
+
+These features are now Generally Available (GA). Updates include the general look and feel, drill-down panes, search and action options, and more. For example:
+
+**Simplified workflows include**:
+
+- The **Device inventory** page now includes detailed device pages. Select a device in the table and then select **View full details** on the right.
+
+ :::image type="content" source="media/release-notes/device-inventory-details.png" alt-text="Screenshot of the View full details button." lightbox="media/release-notes/device-inventory-details.png":::
+
+- Properties updated from the sensor's inventory are now automatically updated in the cloud device inventory.
+
+- The device details pages, accessed from either the **Device map** or the **Device inventory** page, are shown as read-only. To modify device properties, select **Edit properties** on the bottom-left.
+
+- The **Data mining** page now includes reporting functionality. While the **Reports** page was removed, users with read-only access can view updates on the **Data mining** page without the ability to modify reports or settings.
+
+ For admin users creating new reports, you can now toggle on a **Send to CM** option to send the report to a central management console as well. For more information, see [Create a report](how-to-create-data-mining-queries.md#create-a-report).
+
+- The **System settings** area has been reorganized into sections for *Basic* settings, *Network monitoring*, *Sensor management*, *Integrations*, and *Import settings*.
+
+- The sensor online help now links to key articles in the Microsoft Defender for IoT documentation.
+
+**Defender for IoT maps now include**:
+
+- A new **Map View** is now shown for alerts and on the device details pages, showing where in your environment the alert or device is found.
+
+- Right-click a device on the map to view contextual information about the device, including related alerts, event timeline data, and connected devices.
+
+- To enable the ability to collapse IT networks, ensure that the **Toggle IT Networks Grouping** option is enabled. This option is now only available from the map.
+
+- The **Simplified Map View** option has been removed.
+
+We've also implemented global readiness and accessibility features to comply with Microsoft standards. In the on-premises sensor console, these updates include both high contrast and regular screen display themes and localization for over 15 languages.
+
+For example:
++
+Access global readiness and accessibility options from the **Settings** icon at the top-right corner of your screen:
++
+### Enhanced sensor Overview page
+
+The Defender for IoT sensor portal's **Dashboard** page has been renamed as **Overview**, and now includes data that better highlights system deployment details, critical network monitoring health, top alerts, and important trends and statistics.
++
+The Overview page also now serves as a *black box* to view your overall sensor status in case your outbound connections, such as to the Azure portal, go down.
+
+Create more dashboards using the **Trends & Statistics** page, located under the **Analyze** menu on the left.
+
+### New support diagnostics log
+
+Now you can get a summary of the log and system information that gets added to your support tickets. In the **Backup and Restore** dialog, select **Support Ticket Diagnostics**.
++
+For more information, see [Download a diagnostics log for support](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support)
+
+### Alert updates
+
+**In the Azure portal**:
+
+Alerts are now available in Defender for IoT in the Azure portal. Work with alerts to enhance the security and operation of your IoT/OT network.
+
+The new **Alerts** page is currently in Public Preview, and provides:
+
+- An aggregated, real-time view of threats detected by network sensors.
+- Remediation steps for devices and network processes.
+- Streaming of alerts to Microsoft Sentinel to empower your SOC team.
+- Alert storage for 90 days from the time they're first detected.
+- Tools to investigate source and destination activity, alert severity and status, MITRE ATT&CK information, and contextual information about the alert.
+
+For example:
++
+**On the sensor console**:
+
+On the sensor console, the **Alerts** page now shows details for alerts detected by sensors that are configured with a cloud-connection to Defender for IoT on Azure. Users working with alerts in both Azure and on-premises should understand how alerts are managed between the Azure portal and the on-premises components.
++
+Other alert updates include:
+
+- **Access contextual data** for each alert, such as events that occurred around the same time, or a map of connected devices. Maps of connected devices are available for sensor console alerts only.
+
+- **Alert statuses** are updated. For example, alerts now include a *Closed* status instead of *Acknowledged*.
+
+- **Alert storage** for 90 days from the time that they're first detected.
+
+- The **Backup Activity with Antivirus Signatures Alert**. This new alert warning is triggered for traffic detected between a source device and destination backup server, which is often legitimate backup activity. Critical or major malware alerts are no longer triggered for such activity.
+
+- **During upgrades**, sensor console alerts that are currently archived are deleted. Pinned alerts are no longer supported, so pins are removed for sensor console alerts as relevant.
+
+For more information, see [View alerts on your sensor](how-to-view-alerts.md).
+
+### Custom alert updates
+
+The sensor console's **Custom alert rules** page now provides:
+
+- Hit count information in the **Custom alert rules** table, with at-a-glance details about the number of alerts triggered in the last week for each rule you've created.
+
+- The ability to schedule custom alert rules to run outside of regular working hours.
+
+- The ability to alert on any field that can be extracted from a protocol using the DPI engine.
+
+- Complete protocol support when creating custom rules, and support for an extensive range of related protocol variables.
+
+ :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/protocol-support-custom-alerts.png" alt-text="Screenshot of the updated Custom alerts dialog." lightbox="media/how-to-manage-sensors-on-the-cloud/protocol-support-custom-alerts.png":::
+
+For more information and the updated custom alert procedure, see [Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules).
+
+### CLI command updates
+
+The Defender for IoT sensor software installation is now containerized. With the now-containerized sensor, you can use the *cyberx_host* user to investigate issues with other containers or the operating system, or to send files via FTP.
+
+This *cyberx_host* user is available by default and connects to the host machine. If you need to, recover the password for the *cyberx_host* user from the **Sites and sensors** page in Defender for IoT.
+
+As part of the containerized sensor, the following CLI commands have been modified:
+
+|Legacy name |Replacement |
+|||
+|`cyberx-xsense-reconfigure-interfaces` |`sudo dpkg-reconfigure iot-sensor` |
+|`cyberx-xsense-reload-interfaces` | `sudo dpkg-reconfigure iot-sensor` |
+|`cyberx-xsense-reconfigure-hostname` | `sudo dpkg-reconfigure iot-sensor` |
+| `cyberx-xsense-system-remount-disks` |`sudo dpkg-reconfigure iot-sensor` |
+
+The `sudo cyberx-xsense-limit-interface -I eth0 -l value` CLI command was removed. This command was used to limit the interface bandwidth that the sensor uses for day-to-day procedures, and is no longer supported.
+
+For more information, see [Defender for IoT installation](how-to-install-software.md) and [Work with Defender for IoT CLI commands](references-work-with-defender-for-iot-cli-commands.md).
+
+### Update to version 22.1.x
+
+To use all of Defender for IoT's latest features, make sure to update your sensor software versions to 22.1.x.
+
+If you're on a legacy version, you may need to run a series of updates in order to get to the latest version. You'll also need to update your firewall rules and re-activate your sensor with a new activation file.
+
+After you've upgraded to version 22.1.x, the new upgrade log can be found at the following path, accessed via SSH and the *cyberx_host* user: `/opt/sensor/logs/legacy-upgrade.log`.
+
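+For example, a minimal sketch of checking the log, assuming you have network access to the sensor and the *cyberx_host* credentials:
+
+```bash
+# Connect to the sensor as the cyberx_host user (replace <sensor-ip> with your sensor's IP address)
+ssh cyberx_host@<sensor-ip>
+
+# View the most recent entries in the upgrade log
+tail -n 100 /opt/sensor/logs/legacy-upgrade.log
+```
+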
+For more information, see [Update OT system software](update-ot-software.md).
+
+> [!NOTE]
+> Upgrading to version 22.1.x is a large update, and you should expect the update process to require more time than previous updates.
+>
+
+### New connectivity model and firewall requirements
+
+Defender for IoT version 22.1.x supports a new set of sensor connection methods that provide simplified deployment, improved security, scalability, and flexible connectivity.
+
+In addition to [migration steps](connect-sensors.md#migration-for-existing-customers), this new connectivity model requires that you open a new firewall rule. For more information, see:
+
+- **New firewall requirements**: [Sensor access to Azure portal](how-to-set-up-your-network.md#sensor-access-to-azure-portal).
+- **Architecture**: [Sensor connection methods](architecture-connections.md)
+- **Connection procedures**: [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md)
+
+### Protocol improvements
+
+This version of Defender for IoT provides improved support for:
+
+- Profinet DCP
+- Honeywell
+- Windows endpoint detection
+
+For more information, see [Microsoft Defender for IoT - supported IoT, OT, ICS, and SCADA protocols](concept-supported-protocols.md).
+
+### Modified, replaced, or removed options and configurations
+
+The following Defender for IoT options and configurations have been moved, removed, and/or replaced:
+
+- Reports previously found on the **Reports** page are now shown on the **Data Mining** page instead. You can also continue to view data mining information directly from the on-premises management console.
+
+- Changing a locally managed sensor name is now supported only by onboarding the sensor to the Azure portal again with the new name. Sensor names can no longer be changed directly from the sensor. For more information, see [Change the name of a sensor](how-to-manage-individual-sensors.md#change-the-name-of-a-sensor).
++
+## December 2021
+
+**Sensor software version**: 10.5.4
+
+- [Enhanced integration with Microsoft Sentinel (Preview)](#enhanced-integration-with-microsoft-sentinel-preview)
+- [Apache Log4j vulnerability](#apache-log4j-vulnerability)
+- [Alerting](#alerting)
+
+### Enhanced integration with Microsoft Sentinel (Preview)
+
+The new **IoT OT Threat Monitoring with Defender for IoT solution** is available and provides enhanced capabilities for Microsoft Defender for IoT integration with Microsoft Sentinel. The **IoT OT Threat Monitoring with Defender for IoT solution** is a set of bundled content, including analytics rules, workbooks, and playbooks, configured specifically for Defender for IoT data. This solution currently supports only Operational Networks (OT/ICS).
+
+For information on integrating with Microsoft Sentinel, see [Tutorial: Integrate Defender for IoT and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended).
+
+### Apache Log4j vulnerability
+
+Version 10.5.4 of Microsoft Defender for IoT mitigates the Apache Log4j vulnerability. For details, see [the security advisory update](https://techcommunity.microsoft.com/t5/microsoft-defender-for-iot/updated-15-dec-defender-for-iot-security-advisory-apache-log4j/m-p/3036844).
+
+### Alerting
+
+Version 10.5.4 of Microsoft Defender for IoT delivers important alert enhancements:
+
+- Alerts for certain minor events or edge-cases are now disabled.
+- For certain scenarios, similar alerts are minimized in a single alert message.
+
+These changes reduce alert volume and enable more efficient targeting and analysis of security and operational events.
+
+For more information, see [OT monitoring alert types and descriptions](alert-engine-messages.md).
+
+#### Alerts permanently disabled
+
+The alerts listed below are permanently disabled with version 10.5.4. Detection and monitoring are still supported for traffic associated with the alerts.
+
+**Policy engine alerts**
+
+- RPC Procedure Invocations
+- Unauthorized HTTP Server
+- Abnormal usage of MAC Addresses
+
+#### Alerts disabled by default
+
+The alerts listed below are disabled by default with version 10.5.4. You can re-enable the alerts from the Support page of the sensor console, if necessary.
+
+**Anomaly engine alert**
+- Abnormal Number of Parameters in HTTP Header
+- Abnormal HTTP Header Length
+- Illegal HTTP Header Content
+
+**Operational engine alerts**
+- HTTP Client Error
+- RPC Operation Failed
+
+**Policy engine alerts**
+
+Disabling these alerts also disables monitoring of related traffic. Specifically, this traffic won't be reported in Data Mining reports.
+
+- Illegal HTTP Communication alert and HTTP Connections Data Mining traffic
+- Unauthorized HTTP User Agent alert and HTTP User Agents Data Mining traffic
+- Unauthorized HTTP SOAP Action and HTTP SOAP Actions Data Mining traffic
+
+#### Updated alert functionality
+
+**Unauthorized Database Operation alert**
+Previously, this alert covered DDL and DML alerting and Data Mining reporting. Now:
+- DDL traffic: Alerting and monitoring are supported.
+- DML traffic: Monitoring is supported. Alerting isn't supported.
+
+**New Asset Detected alert**
+This alert is disabled for new devices detected in IT subnets. The New Asset Detected alert is still triggered for new devices discovered in OT subnets. OT subnets are detected automatically and can be updated by users if necessary.
+
+### Minimized alerting
+
+Alert triggering for specific scenarios has been minimized to help reduce alert volume and simplify alert investigation. In these scenarios, if a device performs repeated activity on targets, an alert is triggered once. Previously, a new alert was triggered each time the same activity was carried out.
+
+This new functionality is available on the following alerts:
+
+- Port Scan Detected alerts, based on activity of the source device (generated by the Anomaly engine)
+- Malware alerts, based on activity of the source device (generated by the Malware engine)
+- Suspicion of Denial of Service Attack alerts, based on activity of the destination device (generated by the Malware engine)
+
+## Next steps
+
+[Getting started with Defender for IoT](getting-started.md)
dns Dns Private Resolver Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-bicep.md
+
+ Title: 'Quickstart: Create an Azure DNS Private Resolver - Bicep'
+
+description: Learn how to create Azure DNS Private Resolver. This article is a step-by-step quickstart to create and manage your first Azure DNS Private Resolver using Bicep.
+++ Last updated : 10/07/2022+++
+#Customer intent: As an administrator or developer, I want to learn how to create Azure DNS Private Resolver using Bicep so I can use Azure DNS Private Resolver as forwarder.
++
+# Quickstart: Create an Azure DNS Private Resolver using Bicep
+
+This quickstart describes how to use Bicep to create Azure DNS Private Resolver.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-dns-private-resolver).
+
+This Bicep file is configured to create the following resources:
+
+- Virtual network
+- DNS resolver
+- Inbound & outbound endpoints
+- Forwarding Rules & rulesets.
++
+Seven resources have been defined in this template:
+
+- [**Microsoft.Network/virtualnetworks**](/azure/templates/microsoft.network/virtualnetworks)
+- [**Microsoft.Network/dnsResolvers**](/azure/templates/microsoft.network/dnsresolvers)
+- [**Microsoft.Network/dnsResolvers/inboundEndpoints**](/azure/templates/microsoft.network/dnsresolvers/inboundendpoints)
+- [**Microsoft.Network/dnsResolvers/outboundEndpoints**](/azure/templates/microsoft.network/dnsresolvers/outboundendpoints)
+- [**Microsoft.Network/dnsForwardingRulesets**](/azure/templates/microsoft.network/dnsforwardingrulesets)
+- [**Microsoft.Network/dnsForwardingRulesets/forwardingRules**](/azure/templates/microsoft.network/dnsforwardingrulesets/forwardingrules)
+- [**Microsoft.Network/dnsForwardingRulesets/virtualNetworkLinks**](/azure/templates/microsoft.network/dnsforwardingrulesets/virtualnetworklinks)
+
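+Optionally, you can preview the changes this Bicep file will make before deploying it. The following is a minimal sketch that assumes you've already saved the file locally as **main.bicep** (as described in the next section) and created the `exampleRG` resource group:
+
+```azurecli
+# Preview the resources that the Bicep file would create, without deploying them
+az deployment group what-if --resource-group exampleRG --template-file main.bicep
+```
+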
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+2. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+# [CLI](#tab/CLI)
+
+````azurecli
+az group create --name exampleRG --location eastus
+az deployment group create --resource-group exampleRG --template-file main.bicep
+````
+
+# [PowerShell](#tab/PowerShell)
+
+````azurepowershell
+New-AzResourceGroup -Name exampleRG -Location eastus
+New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+````
+++
+When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli
+#Show the DNS resolver
+az dns-resolver show --name "sampleDnsResolver" --resource-group "sampleResourceGroup"
+
+#List the inbound endpoint
+az dns-resolver inbound-endpoint list --dns-resolver-name "sampleDnsResolver" --resource-group "sampleResourceGroup"
+
+#List the outbound endpoint
+az dns-resolver outbound-endpoint list --dns-resolver-name "sampleDnsResolver" --resource-group "sampleResourceGroup"
+
+```
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell
+#Show the DNS resolver
+Get-AzDnsResolver -Name "sampleDNSResolver" -ResourceGroupName "sampleResourceGroup"
+
+#List the inbound endpoint list
+Get-AzDnsResolverInboundEndpoint -DnsResolverName "sampleDnsResolver" -ResourceGroupName "sampleResourceGroup"
+
+#List the outbound endpoint
+Get-AzDnsResolverOutboundEndpoint -DnsResolverName "sampleDnsResolver" -ResourceGroupName "sampleResourceGroup"
+
+```
++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resources in the following order.
+
+### Delete the DNS resolver
+
+# [CLI](#tab/CLI)
+````azurecli
+#Delete the inbound endpoint
+az dns-resolver inbound-endpoint delete --dns-resolver-name "sampleDnsResolver" --name "sampleInboundEndpoint" --resource-group "exampleRG"
+
+#Delete the virtual network link
+az dns-resolver vnet-link delete --ruleset-name "sampleDnsForwardingRuleset" --resource-group "exampleRG" --name "sampleVirtualNetworkLink"
+
+#Delete DNS forwarding ruleset
+az dns-resolver forwarding-ruleset delete --name "samplednsForwardingRulesetName" --resource-group "exampleRG"
+
+#Delete the outbound endpoint
+az dns-resolver outbound-endpoint delete --dns-resolver-name "sampleDnsResolver" --name "sampleOutboundEndpoint" --resource-group "exampleRG"
+
+#Delete the DNS resolver
+az dns-resolver delete --name "sampleDnsResolver" --resource-group "exampleRG"
+````
+
+# [PowerShell](#tab/PowerShell)
+```azurepowershell
+#Delete the inbound endpoint
+Remove-AzDnsResolverInboundEndpoint -Name myinboundendpoint -DnsResolverName mydnsresolver -ResourceGroupName myresourcegroup
+
+#Delete the virtual network link
+Remove-AzDnsForwardingRulesetVirtualNetworkLink -DnsForwardingRulesetName $dnsForwardingRuleset.Name -Name vnetlink -ResourceGroupName myresourcegroup
+
+#Delete the DNS forwarding ruleset
+Remove-AzDnsForwardingRuleset -Name $dnsForwardingRuleset.Name -ResourceGroupName myresourcegroup
+
+#Delete the outbound endpoint
+Remove-AzDnsResolverOutboundEndpoint -DnsResolverName mydnsresolver -ResourceGroupName myresourcegroup -Name myoutboundendpoint
+
+#Delete the DNS resolver
+Remove-AzDnsResolver -Name mydnsresolver -ResourceGroupName myresourcegroup
+```
++
+## Next steps
+
+In this quickstart, you created a virtual network and DNS private resolver. Now configure name resolution for Azure and on-premises domains:
+- [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md)
dns Dns Private Resolver Get Started Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-template.md
+
+ Title: 'Quickstart: Create an Azure DNS Private Resolver - Azure Resource Manager template (ARM template)'
+
+description: Learn how to create Azure DNS Private Resolver. This article is a step-by-step quickstart to create and manage your first Azure DNS Private Resolver using Azure Resource Manager template (ARM template).
+++ Last updated : 10/07/2022+++
+#Customer intent: As an administrator or developer, I want to learn how to create Azure DNS Private Resolver using ARM template so I can use Azure DNS Private Resolver as forwarder..
++
+# Quickstart: Create an Azure DNS Private Resolver using an ARM template
+
+This quickstart describes how to use an Azure Resource Manager template (ARM template) to create Azure DNS Private Resolver.
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Fazure-dns-private-resolver%2Fazuredeploy.json)
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the template
+
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-dns-private-resolver).
+
+This template is configured to create the following resources:
+
+- Virtual network
+- DNS resolver
+- Inbound & outbound endpoints
+- Forwarding Rules & rulesets.
++
+Seven resources have been defined in this template:
+
+- [**Microsoft.Network/virtualnetworks**](/azure/templates/microsoft.network/virtualnetworks)
+- [**Microsoft.Network/dnsResolvers**](/azure/templates/microsoft.network/dnsresolvers)
+- [**Microsoft.Network/dnsResolvers/inboundEndpoints**](/azure/templates/microsoft.network/dnsresolvers/inboundendpoints)
+- [**Microsoft.Network/dnsResolvers/outboundEndpoints**](/azure/templates/microsoft.network/dnsresolvers/outboundendpoints)
+- [**Microsoft.Network/dnsForwardingRulesets**](/azure/templates/microsoft.network/dnsforwardingrulesets)
+- [**Microsoft.Network/dnsForwardingRulesets/forwardingRules**](/azure/templates/microsoft.network/dnsforwardingrulesets/forwardingrules)
+- [**Microsoft.Network/dnsForwardingRulesets/virtualNetworkLinks**](/azure/templates/microsoft.network/dnsforwardingrulesets/virtualnetworklinks)
++
+## Deploy the template
+
+# [CLI](#tab/CLI)
+
+````azurecli-interactive
+read -p "Enter the location: " location
+resourceGroupName="exampleRG"
+templateUri="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/azure-dns-private-resolver/azuredeploy.json"
+
+az group create \
+--name $resourceGroupName \
+--location $location
+
+az deployment group create \
+--resource-group $resourceGroupName \
+--template-uri $templateUri
+````
+
+# [PowerShell](#tab/PowerShell)
+````azurepowershell-interactive
+$location = Read-Host -Prompt "Enter the location: "
+$templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/azure-dns-private-resolver/azuredeploy.json"
+
+$resourceGroupName = "exampleRG"
+
+New-AzResourceGroup -Name $resourceGroupName -Location $location
+New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri $templateUri
+````
++
+## Validate the deployment
+
+1. Sign in to the [Azure portal](https://portal.azure.com)
+
+1. Select **Resource groups** from the left pane.
+
+1. Select the resource group that you created in the previous section.
+
+1. The resource group should contain the following resources:
+
+ [ ![DNS resolver resource group](./media/dns-resolver-getstarted-template/dns-resolver-resource-group.png)](./media/dns-resolver-getstarted-template/dns-resolver-resource-group.png#lightbox)
+
+1. Select the DNS private resolver service to verify the provisioning and current state.
+
+ [ ![DNS resolver page](./media/dns-resolver-getstarted-template/resolver-page.png)](./media/dns-resolver-getstarted-template/resolver-page.png#lightbox)
+
+1. Select the Inbound Endpoints and Outbound Endpoints to verify that the endpoints are created and the outbound endpoint is associated with the forwarding ruleset.
+
+ [ ![DNS resolver inbound endpoint](./media/dns-resolver-getstarted-template/resolver-inbound-endpoint.png)](./media/dns-resolver-getstarted-template/resolver-inbound-endpoint.png#lightbox)
+
+ [ ![DNS resolver outbound endpoint](./media/dns-resolver-getstarted-template/resolver-outbound-endpoint.png)](./media/dns-resolver-getstarted-template/resolver-outbound-endpoint.png#lightbox)
+
+1. Select the **Associated ruleset** from the outbound endpoint page to verify the forwarding ruleset and rules creation.
+
+ [ ![DNS resolver forwarding rule](./media/dns-resolver-getstarted-template/resolver-forwarding-rule.png)](./media/dns-resolver-getstarted-template/resolver-forwarding-rule.png#lightbox)
+
+1. Verify that the resolver virtual network is linked to the forwarding ruleset.
+
+ [ ![DNS resolver VNet link](./media/dns-resolver-getstarted-template/resolver-vnet-link.png)](./media/dns-resolver-getstarted-template/resolver-vnet-link.png#lightbox)
+
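+If you prefer the command line, you can also confirm the deployment with Azure CLI. This is a minimal sketch that only lists the contents of the resource group you created earlier; the individual resource names depend on the template parameters you used:
+
+```azurecli
+# List everything deployed into the quickstart resource group
+az resource list --resource-group exampleRG --output table
+```
+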
+## Next steps
+
+In this quickstart, you created a virtual network and DNS private resolver. Now configure name resolution for Azure and on-premises domains:
+- [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md)
event-grid Auth0 Log Stream Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/auth0-log-stream-blob-storage.md
This article shows you how to send Auth0 events to Azure Blob Storage via Azure
1. Select the container and verify that your Auth0 logs are being stored. > [!NOTE]
- > You can use steps in the article to handle events from other event sources too. For a generic example of sending Event Grid events to Azure Blob Storage or Azure Monitor's Application Insights, see [this example on GitHub](https://github.com/awkwardindustries/azure-monitor-handler).
+ > You can use the steps in this article to handle events from other event sources too. For a generic example of sending Event Grid events to Azure Blob Storage or Azure Monitor Application Insights, see [this example on GitHub](https://github.com/awkwardindustries/azure-monitor-handler).
## Next steps - [Auth0 Partner Topic](auth0-overview.md) - [Subscribe to Auth0 events](auth0-how-to.md)-- [Send Auth0 events to Azure Blob Storage](auth0-log-stream-blob-storage.md)
+- [Send Auth0 events to Azure Blob Storage](auth0-log-stream-blob-storage.md)
hdinsight Apache Spark Analyze Application Insight Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-analyze-application-insight-logs.md
For information on adding storage to an existing cluster, see the [Add additiona
### Data schema
-Application Insights provides [export data model](../../azure-monitor/app/export-data-model.md) information for the telemetry data format exported to blobs. The steps in this document use Spark SQL to work with the data. Spark SQL can automatically generate a schema for the JSON data structure logged by Application Insights.
+Application Insights provides [export data model](../../azure-monitor/app/export-telemetry.md#application-insights-export-data-model) information for the telemetry data format exported to blobs. The steps in this document use Spark SQL to work with the data. Spark SQL can automatically generate a schema for the JSON data structure logged by Application Insights.
## Export telemetry data
healthcare-apis Use Smart On Fhir Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/use-smart-on-fhir-proxy.md
Last updated 06/03/2022
# SMART on FHIR overview
-[SMART on FHIR](https://docs.smarthealthit.org/) is a set of open specifications to integrate partner applications with FHIR servers and electronic medical records systems that have Fast Healthcare Interoperability Resources (FHIR&#174;) interfaces. One of the main purposes of the specifications is to describe how an application should discover authentication endpoints for an FHIR server and start an authentication sequence.
+Substitutable Medical Applications and Reusable Technologies ([SMART on FHIR](https://docs.smarthealthit.org/)) is a healthcare standard through which applications can access clinical information through a data store. It adds a security layer, based on open standards including OAuth2 and OpenID Connect, to FHIR interfaces to enable integration with EHR systems. Using SMART on FHIR provides at least three important benefits:
+- Applications have a known method for obtaining authentication/authorization to a FHIR repository.
+- Users accessing a FHIR repository with SMART on FHIR are restricted to resources associated with the user, rather than having access to all data in the repository.
+- Users have the ability to grant applications access to a further limited set of their data by using SMART clinical scopes.
-Authentication is based on OAuth2. But because SMART on FHIR uses parameter naming conventions that arenΓÇÖt immediately compatible with Azure Active Directory (Azure AD), the Azure API for FHIR has a built-in Azure AD SMART on FHIR proxy that enables a subset of the SMART on FHIR launch sequences. Specifically, the proxy enables the [EHR launch sequence](https://hl7.org/fhir/smart-app-launch/#ehr-launch-sequence).
+<!SMART Implementation Guide v1.0.0 is supported by Azure Health Data Services and Azure API Management (APIM). This is our recommended approach, as it enabled Health IT developers to comply with 21st Century Act Criterion §170.315(g)(10) Standardized API for patient and population services.
-Below tutorial describes how to use the proxy to enable SMART on FHIR applications with Azure API for FHIR.
+Sample demonstrates and list steps that can be referenced to pass ONC G(10) with Inferno test suite.
-## Tutorial: SMART on FHIR proxy
-**Prerequisites**
+>
-- An instance of the Azure API for FHIR-- [.NET Core 2.2](https://dotnet.microsoft.com/download/dotnet-core/2.2)
+One of the main purposes of the specifications is to describe how an application should discover authentication endpoints for an FHIR server and start an authentication sequence. SMART on FHIR uses parameter naming conventions that aren't immediately compatible with Azure Active Directory (Azure AD). The Azure API for FHIR has a built-in Azure AD SMART on FHIR proxy that enables a subset of the SMART on FHIR launch sequences. Specifically, the proxy enables the [EHR launch sequence](https://hl7.org/fhir/smart-app-launch/#ehr-launch-sequence).
-## Configure Azure AD registrations
+The following tutorial describes the steps to enable SMART on FHIR applications with the FHIR service.
-SMART on FHIR requires that `Audience` has an identifier URI equal to the URI of the FHIR service. The standard configuration of the Azure API for FHIR uses an `Audience` value of `https://azurehealthcareapis.com`. However, you can also set a value matching the specific URL of your FHIR service (for example `https://MYFHIRAPI.azurehealthcareapis.com`). This is required when working with the SMART on FHIR proxy.
+## Prerequisites
+
+- An instance of the FHIR Service
+- .NET SDK 6.0
+- [Enable cross-origin resource sharing (CORS)](configure-cross-origin-resource-sharing.md)
+- [Register public client application in Azure AD](https://learn.microsoft.com/azure/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app)
+ - After registering the application, make a note of the application (client) ID for the client application.
-You'll also need a client application registration. Most SMART on FHIR applications are single-page JavaScript applications. So you should follow the instructions for configuring a [public client application in Azure AD](register-public-azure-ad-client-app.md).
+<! Tutorial : To enable SMART on FHIR using APIM, follow below steps
+As a pre-requisite , ensure you have access to Azure Subscription of FHIR service, to create resources and add role assignments.
-After you complete these steps, you should have:
+Step 1 : Set up FHIR SMART user role
+Follow the steps listed under section [Manage Users: Assign Users to Role](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to role - "FHIR SMART User" will be able to access the FHIR Service if their requests comply with the SMART on FHIR implementation Guide, such as request having access token which includes a fhirUser claim and a clinical scopes claim. The access granted to the users in this role will then be limited by the resources associated to their fhirUser compartment and the restrictions in the clinical scopes.
-- A FHIR server with the audience set to `https://MYFHIRAPI.azurehealthcareapis.com`, where `MYFHIRAPI` is the name of your Azure API for FHIR instance.-- A public client application registration. Make a note of the application ID for this client application.
+Step 2 : [Follow the steps](https://github.com/microsoft/fhir-server/tree/feature/smart-onc-g10-sample/samples/smart) for setting up the FHIR server integrated with APIM in production. >
-### Set admin consent for your app
+Let's go over the individual steps to enable SMART on FHIR.
+## Step 1: Set admin consent for your client application
To use SMART on FHIR, you must first authenticate and authorize the app. The first time you use SMART on FHIR, you also must get administrative consent to let the app access your FHIR resources.
If you do have administrative privileges, complete the following steps to grant
To add yourself or another user as owner of an app: 1. In the Azure portal, go to Azure Active Directory.
-1. In the left menu, select **App Registration**.
-1. Search for the app registration you created, and then select it.
-1. In the left menu, under **Manage**, select **Owners**.
-1. Select **Add owners**, and then add yourself or the user you want to have admin consent.
-1. Select **Save**.
-
-## Enable the SMART on FHIR proxy
-
-Enable the SMART on FHIR proxy in the **Authentication** settings for your Azure API for FHIR instance by selecting the **SMART on FHIR proxy** check box:
+2. In the left menu, select **App Registration**.
+3. Search for the app registration you created, and then select it.
+4. In the left menu, under **Manage**, select **Owners**.
+5. Select **Add owners**, and then add yourself or the user you want to have admin consent.
+6. Select **Save**
-![Selections for enabling the SMART on FHIR proxy](media/tutorial-smart-on-fhir/enable-smart-on-fhir-proxy.png)
-
-## Enable CORS
+## Step 2: Enable the SMART on FHIR proxy
-Because most SMART on FHIR applications are single-page JavaScript apps, you need to [enable cross-origin resource sharing (CORS)](configure-cross-origin-resource-sharing.md) for the Azure API for FHIR:
+SMART on FHIR requires that `Audience` has an identifier URI equal to the URI of the FHIR service. The standard configuration of the Azure API for FHIR uses an `Audience` value of `https://azurehealthcareapis.com`. However, you can also set a value matching the specific URL of your FHIR service (for example `https://MYFHIRAPI.azurehealthcareapis.com`). This is required when working with the SMART on FHIR proxy.
-![Selections for enabling CORS](media/tutorial-smart-on-fhir/enable-cors.png)
+To enable the SMART on FHIR proxy in the **Authentication** settings for your Azure API for FHIR instance, select the **SMART on FHIR proxy** check box:
-## Configure the reply URL
+![Selections for enabling the SMART on FHIR proxy](media/tutorial-smart-on-fhir/enable-smart-on-fhir-proxy.png)
The SMART on FHIR proxy acts as an intermediary between the SMART on FHIR app and Azure AD. The authentication reply (the authentication code) must go to the SMART on FHIR proxy instead of the app itself. The proxy then forwards the reply to the app.
Add the reply URL to the public client application that you created earlier for
![Reply URL configured for the public client](media/tutorial-smart-on-fhir/configure-reply-url.png)
-## Get a test patient
+## Step 3: Get a test patient
To test the Azure API for FHIR and the SMART on FHIR proxy, you'll need to have at least one patient in the database. If you've not interacted with the API yet, and you don't have data in the database, see [Access the FHIR service using Postman](./../fhir/use-postman.md) to load a patient. Make a note of the ID of a specific patient.
-## Download the SMART on FHIR app launcher
+## Step 4: Download the SMART on FHIR app launcher
The open-source [FHIR Server for Azure repository](https://github.com/Microsoft/fhir-server) includes a simple SMART on FHIR app launcher and a sample SMART on FHIR app. In this tutorial, use this SMART on FHIR launcher locally to test the setup.
Use this command to run the application:
dotnet run ```
-## Test the SMART on FHIR proxy
+## Step 5: Test the SMART on FHIR proxy
After you start the SMART on FHIR app launcher, you can point your browser to `https://localhost:5001`, where you should see the following screen:
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/smart-on-fhir.md
Last updated 11/10/2022
# SMART on FHIR
-Substitutable Medical Applications and Reusable Technologies [SMART on FHIR](https://docs.smarthealthit.org/) is a healthcare standard through which applications can access clinical information through a data store. It adds a security layer based on open standards including OAuth2 and OpenID Connect, to FHIR interfaces to enable integration with EHR systems. Using SMART on FHIR provides at least three important benefits:
-- Applications have a known method for obtaining authentication/authorization to a FHIR repository-- Users accessing a FHIR repository with SMART on FHIR are restricted to resources associated with the user, rather than having access to all data in the repository-- Users have the ability to grant applications access to a further limited set of their data by using SMART clinical scopes.
+Substitutable Medical Applications and Reusable Technologies ([SMART on FHIR](https://docs.smarthealthit.org/)) is a healthcare standard through which applications can access clinical information through a data store. It adds a security layer, based on open standards including OAuth2 and OpenID Connect, to FHIR interfaces to enable integration with EHR systems. Using SMART on FHIR provides at least three important benefits:
+- Applications have a known method for obtaining authentication/authorization to a FHIR repository.
+- Users accessing a FHIR repository with SMART on FHIR are restricted to resources associated with the user, rather than having access to all data in the repository.
+- Users have the ability to grant applications access to a limited set of their data by using SMART clinical scopes.
<!SMART Implementation Guide v1.0.0 is supported by Azure Health Data Services and Azure API Management (APIM). This is our recommended approach, as it enabled Health IT developers to comply with 21st Century Act Criterion §170.315(g)(10) Standardized API for patient and population services. Sample demonstrates and list steps that can be referenced to pass ONC G(10) with Inferno test suite. >-
-One of the main purposes of the specifications is to describe how an application should discover authentication endpoints for an FHIR server and start an authentication sequence. SMART on FHIR uses parameter naming conventions that aren't immediately compatible with Azure Active Directory (Azure AD), the Azure Health Data Services (FHIR Service) has a built-in Azure AD SMART on FHIR proxy that enables a subset of the SMART on FHIR launch sequences. Specifically, the proxy enables the [EHR launch sequence](https://hl7.org/fhir/smart-app-launch/#ehr-launch-sequence).
+One of the main purposes of the specification is to describe how an application should discover authentication endpoints for a FHIR server and start an authentication sequence. SMART on FHIR uses parameter naming conventions that aren't immediately compatible with Azure Active Directory (Azure AD). Azure Health Data Services (FHIR Service) has a built-in Azure AD SMART on FHIR proxy that enables a subset of the SMART on FHIR launch sequences. Specifically, the proxy enables the [EHR launch sequence](https://hl7.org/fhir/smart-app-launch/#ehr-launch-sequence).
The tutorial below describes the steps to enable SMART on FHIR applications with the FHIR service.
Below tutorial describes steps to enable SMART on FHIR applications with FHIR Se
- An instance of the FHIR Service - .NET SDK 6.0 - [Enable cross-origin resource sharing (CORS)](configure-cross-origin-resource-sharing.md)-- [Register public client application in Azure AD]([https://learn.microsoft.com/azure/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app]
+- [Register public client application in Azure AD](https://learn.microsoft.com/azure/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app)
- After registering the application, make a note of the applicationId for the client application. <! Tutorial: To enable SMART on FHIR using APIM, follow the steps below
+As a prerequisite, ensure you have access to the Azure subscription of the FHIR service, so that you can create resources and add role assignments.
+ Step 1: Set up the FHIR SMART user role. Follow the steps listed under the section [Manage Users: Assign Users to Role](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to this role will be able to access the FHIR service if their requests comply with the SMART on FHIR Implementation Guide, such as a request having an access token that includes a fhirUser claim and a clinical scopes claim. The access granted to users in this role will then be limited by the resources associated with their fhirUser compartment and the restrictions in the clinical scopes.
-Step 2 : Deploy the necessary components to set up the FHIR server integrated with APIM in production. Follow ReadMe
-Step 3 : Load US Core profiles
-Step 4 : Create Azure AD custom policy using this README >
+Step 2: [Follow the steps](https://github.com/microsoft/fhir-server/tree/feature/smart-onc-g10-sample/samples/smart) for setting up the FHIR server integrated with APIM in production. >
Let's go over the individual steps to enable SMART on FHIR.
## Step 1: Set admin consent for your client application
To add yourself or another user as owner of an app:
5. Select **Add owners**, and then add yourself or the user you want to have admin consent. 6. Select **Save**.
+## Step 2: Enable the SMART on FHIR proxy
-## Step 2 : Configure Azure AD registrations
-
-SMART on FHIR requires that `Audience` has an identifier URI equal to the URI of the FHIR service. The standard configuration of the FHIR service uses an `Audience` value of `https://azurehealthcareapis.com`. However, you can also set a value matching the specific URL of your FHIR service (for example `https://MYFHIRAPI.fhir.azurehealthcareapis.com`). This is required when working with the SMART on FHIR proxy.
-## Step 3: Enable the SMART on FHIR proxy
+SMART on FHIR requires that `Audience` has an identifier URI equal to the URI of the FHIR service. The standard configuration of the FHIR service uses an `Audience` value of `https://fhir.azurehealthcareapis.com`. However, you can also set a value matching the specific URL of your FHIR service (for example `https://MYFHIRAPI.fhir.azurehealthcareapis.com`). This is required when working with the SMART on FHIR proxy.
-Enable the SMART on FHIR proxy in the **Authentication** settings for your FHIR instance by selecting the **SMART on FHIR proxy** check box.
+To enable the SMART on FHIR proxy in the **Authentication** settings for your FHIR instance, select the **SMART on FHIR proxy** check box.
The SMART on FHIR proxy acts as an intermediary between the SMART on FHIR app and Azure AD. The authentication reply (the authentication code) must go to the SMART on FHIR proxy instead of the app itself. The proxy then forwards the reply to the app.
You can generate the combined reply URL by using a script like this:
```PowerShell
$replyUrl = "https://localhost:5001/sampleapp/index.html"
-$fhirServerUrl = "https://MYFHIRAPI.azurewebsites.net"
+$fhirServerUrl = "https://MYFHIRAPI.fhir.azurewebsites.net"
$bytes = [System.Text.Encoding]::UTF8.GetBytes($ReplyUrl)
$encodedText = [Convert]::ToBase64String($bytes)
$encodedText = $encodedText.TrimEnd('=');
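# The remaining lines of the script are truncated in this excerpt. A hedged sketch of how
# the combined reply URL is typically assembled; the proxy callback segment below is an
# assumption and not taken from this article.
$encodedText = $encodedText.Replace('/','_').Replace('+','-')
$newReplyUrl = $fhirServerUrl.TrimEnd('/') + "/AadSmartOnFhirProxy/callback/" + $encodedText
Write-Host "Combined reply URL: $newReplyUrl"
```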
Add the reply URL to the public client application that you created earlier for
<!![Reply URL configured for the public client](media/tutorial-smart-on-fhir/configure-reply-url.png)>
-## Step 4 : Get a test patient
+## Step 3: Get a test patient
To test the FHIR service and the SMART on FHIR proxy, you'll need to have at least one patient in the database. If you've not interacted with the API yet, and you don't have data in the database, see [Access the FHIR service using Postman](./../fhir/use-postman.md) to load a patient. Make a note of the ID of a specific patient.
-## Step 5 : Download the SMART on FHIR app launcher
+## Step 4: Download the SMART on FHIR app launcher
The open-source [FHIR Server for Azure repository](https://github.com/Microsoft/fhir-server) includes a simple SMART on FHIR app launcher and a sample SMART on FHIR app. In this tutorial, use this SMART on FHIR launcher locally to test the setup.
Use this command to run the application:
```
dotnet run
```
-## Step 6 : Test the SMART on FHIR proxy
+## Step 5: Test the SMART on FHIR proxy
After you start the SMART on FHIR app launcher, you can point your browser to `https://localhost:5001`, where you should see the following screen:
healthcare-apis Deploy 02 New Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-02-new-button.md
Previously updated : 11/18/2022 Last updated : 11/22/2022 # Quickstart: Deploy MedTech service with an Azure Resource Manager template
-In this article, you'll learn how to deploy MedTech service in the Azure portal using an Azure Resource Manager (ARM) template. This ARM template will be used with the **Deploy to Azure** button to make it easy to provide the information you need to automatically create the infrastructure and configuration of your deployment. For more information about Azure Resource Manager (ARM) templates, see [What are ARM templates?](../../azure-resource-manager/templates/overview.md).
+In this article, you'll learn how to deploy MedTech service in the Azure portal using an Azure Resource Manager (ARM) template. This ARM template will be used with the **Deploy to Azure** button to make it easy to provide the information you need to automatically create the infrastructure and configuration of your deployment.
-The ARM template used in this article is available from the [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors-with-iothub/) site using the **azuredeploy.json** file located on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/).
+For more information about ARM templates, see [What are ARM templates?](../../azure-resource-manager/templates/overview.md).
+
+The ARM template used in this article is available from the [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors/) site using the **azuredeploy.json** file located on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/).
If you need to see a diagram with information on the MedTech service deployment, there's an architecture overview at [Choose a deployment method](deploy-iot-connector-in-azure.md#deployment-architecture-overview). This diagram shows the data flow steps of deployment and how MedTech service processes data into a Fast Healthcare Interoperability Resources (FHIR&#174;) Observation.
healthcare-apis Deploy 08 New Ps Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-08-new-ps-cli.md
Previously updated : 11/18/2022 Last updated : 11/22/2022 # Quickstart: Using Azure PowerShell and Azure CLI to deploy the MedTech service with Azure Resource Manager templates
-In this article, you'll learn how to use Azure PowerShell and Azure CLI to deploy the MedTech service using an Azure Resource Manager (ARM) template. When you call the template from PowerShell or CLI, it provides automation that enables you to distribute your deployment to large numbers of developers. Using PowerShell or CLI allows for modifiable automation capabilities that will speed up your deployment configuration in enterprise environments. For more information about ARM templates, see [What are ARM templates?](./../../azure-resource-manager/templates/overview.md).
+In this article, you'll learn how to use Azure PowerShell and Azure CLI to deploy the MedTech service using an Azure Resource Manager (ARM) template. When you call the template from PowerShell or CLI, it provides automation that enables you to distribute your deployment to large numbers of developers. Using PowerShell or CLI allows for modifiable automation capabilities that will speed up your deployment configuration in enterprise environments.
-The ARM template used in this article is available from the [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors-with-iothub/) site using the **azuredeploy.json** file located on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/).
+For more information about ARM templates, see [What are ARM templates?](./../../azure-resource-manager/templates/overview.md).
-## Resources provided by the ARM template
+The ARM template used in this article is available from the [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors/) site using the **azuredeploy.json** file located on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/).
+
+## Resources provided by the Azure Resource Manager template
The ARM template will help you automatically configure and deploy the following resources. Each one can be modified to meet your deployment requirements.
Before you can begin, you need to have the following prerequisites if you're usi
- Use [Azure CLI](/cli/azure/install-azure-cli).
-## Deploy MedTech service with the ARM template and Azure PowerShell
+## Deploy MedTech service with the Azure Resource Manager template and Azure PowerShell
Complete the following five steps to deploy the MedTech service using Azure PowerShell:
Complete the following five steps to deploy the MedTech service using Azure Powe
> [!NOTE] > If you want to run the Azure PowerShell commands locally, first enter `Connect-AzAccount` into your PowerShell command-line shell and enter your Azure credentials.
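For illustration, a minimal sketch of the PowerShell deployment (the resource group name, location, and raw template URI are assumptions, not the article's exact values; the cmdlet prompts for any required template parameters):
```PowerShell
# Sign in and create a resource group for the deployment (names are placeholders).
Connect-AzAccount
New-AzResourceGroup -Name "rg-medtech-quickstart" -Location "eastus2"

# Deploy the quickstart ARM template directly from its GitHub location; you'll be
# prompted for the template's required parameters.
New-AzResourceGroupDeployment `
    -ResourceGroupName "rg-medtech-quickstart" `
    -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json"
```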
-## Deploy MedTech service with the ARM template and Azure CLI
+## Deploy MedTech service with the Azure Resource Manager template and Azure CLI
Complete the following five steps to deploy the MedTech service using Azure CLI:
iot-central Concepts Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-iot-edge.md
Downstream devices connect to a module in the gateway that provides IoT Central
The IoT Edge device is provisioned in IoT Central along with the downstream devices connected to the IoT Edge device. Currently, IoT Central doesn't have runtime support for a gateway to provide an identity and to provision downstream devices. If you bring your own identity translation module, IoT Central can support this pattern.
-The [Azure IoT Central gateway module for Azure Video Analyzer](https://github.com/iot-for-all/iotc-ava-gateway/blob/main/README.md) on GitHub uses this pattern.
- ### Downstream device relationships with a gateway and modules If the downstream devices connect to an IoT Edge gateway device through the *IoT Edge hub* module, the IoT Edge device is a transparent gateway:
iot-central Howto Manage Dashboards With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-dashboards-with-rest-api.md
The IoT Central REST API lets you:
Use the following request to create a dashboard. ```http
-PUT https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-06-30-preview
+PUT https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-10-31-preview
``` `dashboardId` - A unique [DTMI](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#digital-twin-model-identifier) identifier for the dashboard.
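For illustration, a hedged sketch of calling this endpoint from PowerShell (the app subdomain, dashboard ID, API token, and the contents of `dashboard.json` are placeholders, not values from the article):
```PowerShell
# Placeholder values; replace with your IoT Central app subdomain, a DTMI for the
# dashboard, and an API token generated in your application.
$appSubdomain = "my-iot-central-app"
$dashboardId  = "dtmi:contoso:exampledashboard;1"
$apiToken     = "<your IoT Central API token>"

# Send the PUT request with the dashboard definition read from a local JSON file.
Invoke-RestMethod -Method Put `
    -Uri "https://$appSubdomain.azureiotcentral.com/api/dashboards/$($dashboardId)?api-version=2022-10-31-preview" `
    -Headers @{ Authorization = $apiToken } `
    -ContentType "application/json" `
    -Body (Get-Content -Path .\dashboard.json -Raw)
```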
The response to this request looks like the following example:
Use the following request to retrieve the details of a dashboard by using a dashboard ID. ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-06-30-preview
+GET https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-10-31-preview
``` The response to this request looks like the following example:
The response to this request looks like the following example:
## Update a dashboard ```http
-PATCH https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-06-30-preview
+PATCH https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-10-31-preview
``` The following example shows a request body that updates the display name of a dashboard and size of the tile:
The response to this request looks like the following example:
Use the following request to delete a dashboard by using the dashboard ID: ```http
-DELETE https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-06-30-preview
+DELETE https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-10-31-preview
``` ## List dashboards
DELETE https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboar
Use the following request to retrieve a list of dashboards from your application: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/dashboards?api-version=2022-06-30-preview
+GET https://{your app subdomain}.azureiotcentral.com/api/dashboards?api-version=2022-10-31-preview
``` The response to this request looks like the following example:
iot-central Howto Manage Data Export With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-data-export-with-rest-api.md
Each data export definition can send data to one or more destinations. Create th
Use the following request to create or update a destination definition: ```http
-PUT https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-06-30-preview
+PUT https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-10-31-preview
``` * destinationId - Unique ID for the destination.
The response to this request looks like the following example:
Use the following request to retrieve details of a destination from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-06-30-preview
+GET https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-10-31-preview
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to retrieve a list of destinations from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/dataExport/destinations?api-version=2022-06-30-preview
+GET https://{subdomain}.{baseDomain}/api/dataExport/destinations?api-version=2022-10-31-preview
``` The response to this request looks like the following example:
The response to this request looks like the following example:
### Patch a destination ```http
-PATCH https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-06-30-preview
+PATCH https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-10-31-preview
``` You can use this request to perform an incremental update to a destination. The sample request body looks like the following example, which updates the `displayName` of a destination:
The response to this request looks like the following example:
Use the following request to delete a destination: ```http
-DELETE https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-06-30-preview
+DELETE https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-10-31-preview
``` ### Create or update an export definition
DELETE https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destination
Use the following request to create or update a data export definition: ```http
-PUT https://{subdomain}.{baseDomain}/api/dataExport/exports/{exportId}?api-version=2022-06-30-preview
+PUT https://{subdomain}.{baseDomain}/api/dataExport/exports/{exportId}?api-version=2022-10-31-preview
``` The following example shows a request body that creates an export definition for device telemetry:
The response to this request looks like the following example:
Use the following request to retrieve details of an export definition from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/dataExport/exports/{exportId}?api-version=2022-06-30-preview
+GET https://{subdomain}.{baseDomain}/api/dataExport/exports/{exportId}?api-version=2022-10-31-preview
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to retrieve a list of export definitions from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/dataExport/exports?api-version=2022-06-30-preview
+GET https://{subdomain}.{baseDomain}/api/dataExport/exports?api-version=2022-10-31-preview
``` The response to this request looks like the following example:
The response to this request looks like the following example:
### Patch an export definition ```http
-PATCH https://{subdomain}.{baseDomain}/dataExport/exports/{exportId}?api-version=2022-06-30-preview
+PATCH https://{subdomain}.{baseDomain}/dataExport/exports/{exportId}?api-version=2022-10-31-preview
``` You can use this request to perform an incremental update to an export. The sample request body looks like the following example, which updates the `enrichments` of an export:
The response to this request looks like the following example:
Use the following request to delete an export definition: ```http
-DELETE https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-06-30-preview
+DELETE https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-10-31-preview
``` ## Next steps
iot-central Howto Query With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-query-with-rest-api.md
To learn how to query devices by using the IoT Central UI, see [How to use data
Use the following request to run a query: ```http
-POST https://{your app subdomain}.azureiotcentral.com/api/query?api-version=2022-06-30-preview
+POST https://{your app subdomain}.azureiotcentral.com/api/query?api-version=2022-10-31-preview
``` The query is in the request body and looks like the following example:
iot-central Howto Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data.md
To configure the device bridge to transform the exported device data:
1. Select **Go &rarr;** to open the **App Service Editor** page. Make the following changes:
- 1. Open the *wwwroot/IoTCIntegration/index.js* file. Replace all the code in this file with the code in [index.js](https://raw.githubusercontent.com/iot-for-all/iot-central-compute/main/Azure_function/index.js).
+ 1. Open the *wwwroot/IoTCIntegration/index.js* file. Replace all the code in this file with the code in [index.js](https://raw.githubusercontent.com/Azure/iot-central-compute/main/Azure_function/index.js).
1. In the new *index.js*, update the `openWeatherAppId` variable with the Open Weather API key you obtained previously.
To configure the device bridge to transform the exported device data:
message.properties.add('computed', true); ```
- For reference, you can view a completed example of the [engine.js](https://raw.githubusercontent.com/iot-for-all/iot-central-compute/main/Azure_function/lib/engine.js) file.
+ For reference, you can view a completed example of the [engine.js](https://raw.githubusercontent.com/Azure/iot-central-compute/main/Azure_function/lib/engine.js) file.
1. In the **App Service Editor**, select **Console** in the left navigation. Run the following commands to install the required packages:
To configure the device bridge to transform the exported device data:
This section describes how to set up the Azure IoT Central application.
-First, save the [device model](https://raw.githubusercontent.com/iot-for-all/iot-central-compute/main/model.json) file to your local machine.
+First, save the [device model](https://raw.githubusercontent.com/Azure/iot-central-compute/main/model.json) file to your local machine.
To add a device template to your IoT Central application, navigate to your IoT Central application and then:
To run a sample device that tests the scenario:
1. To clone the GitHub repository that contains the sample code, run the following command: ```bash
- git clone https://github.com/iot-for-all/iot-central-compute
+ git clone https://github.com/Azure/iot-central-compute
``` 1. To connect the sample device to your IoT Central application, edit the connection settings in the *iot-central-compute/device/device.js* file. Replace the scope ID and group SAS key with the values you made a note of previously:
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-solution-builder.md
Scenarios that process IoT data outside of IoT Central to extract business value
For example, use the IoT Central continuous data export feature to continuously ingest your IoT data into an Azure Synapse store. Then use Azure Data Factory to bring data from external systems into the Azure Synapse store. Use the Azure Synapse store with Power BI to generate your business reports.
-To learn more, see [Transform data for IoT Central](howto-transform-data.md). For a complete, end-to-end sample, see the [IoT Central Compute](https://github.com/iot-for-all/iot-central-compute) GitHub repository.
+To learn more, see [Transform data for IoT Central](howto-transform-data.md). For a complete, end-to-end sample, see the [IoT Central Compute](https://github.com/Azure/iot-central-compute) GitHub repository.
## Integrate with other services
You can use the data export and rules capabilities in IoT Central to integrate w
- [Extend Azure IoT Central with custom rules using Stream Analytics, Azure Functions, and SendGrid](howto-create-custom-rules.md) - [Extend Azure IoT Central with custom analytics using Azure Databricks](howto-create-custom-analytics.md)
-You can use IoT Edge devices connected to your IoT Central application to integrate with [Azure Video Analyzer](../../azure-video-analyzer/video-analyzer-docs/overview.md). To learn more, see the [Azure IoT Central gateway module for Azure Video Analyzer](https://github.com/iot-for-all/iotc-ava-gateway/blob/main/README.md) on GitHub.
+You can use IoT Edge devices connected to your IoT Central application to integrate with [Azure Video Analyzer](../../azure-video-analyzer/video-analyzer-docs/overview.md).
## Integrate with companion applications
iot-dps Quick Create Simulated Device Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-tpm.md
In this section, you'll prepare a development environment used to build the [Azu
1. Open a Git CMD or Git Bash command-line environment.
-2. Clone the [azure-utpm-c](https://github.com/Azure-Samples/azure-iot-samples-csharp) GitHub repository using the following command:
+2. Clone the [azure-utpm-c](https://github.com/Azure/azure-utpm-c) GitHub repository using the following command:
```cmd/sh
git clone https://github.com/Azure/azure-utpm-c.git --recursive
```
iot-hub Iot Hub Compare Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-compare-event-hubs.md
Title: Compare Azure IoT Hub to Azure Event Hubs | Microsoft Docs description: A comparison of the IoT Hub and Event Hubs Azure services highlighting functional differences and use cases. The comparison includes supported protocols, device management, monitoring, and file uploads. - Previously updated : 02/20/2019 Last updated : 11/21/2022 # Connecting IoT Devices to Azure: IoT Hub and Event Hubs
-Azure provides services specifically developed for diverse types of connectivity and communication to help you connect your data to the power of the cloud. Both Azure IoT Hub and Azure Event Hubs are cloud services that can ingest large amounts of data and process or store that data for business insights. The two services are similar in that they both support ingestion of data with low latency and high reliability, but they are designed for different purposes. IoT Hub was developed to address the unique requirements of connecting IoT devices to the Azure cloud while Event Hubs was designed for big data streaming. Microsoft recommends using Azure IoT Hub to connect IoT devices to Azure
+Azure provides services developed for diverse types of connectivity and communication to help you connect your data to the power of the cloud. Both Azure IoT Hub and Azure Event Hubs are cloud services that can ingest large amounts of data and process or store that data for business insights. The two services are similar in that they both support ingestion of data with low latency and high reliability, but they're designed for different purposes. IoT Hub was developed to address the unique requirements of connecting IoT devices to the Azure cloud, while Event Hubs was designed for big data streaming. Microsoft recommends using Azure IoT Hub to connect IoT devices to Azure.
Azure IoT Hub is the cloud gateway that connects IoT devices to gather data and drive business insights and automation. In addition, IoT Hub includes features that enrich the relationship between your devices and your backend systems. Bi-directional communication capabilities mean that while you receive data from devices you can also send commands and policies back to devices. For example, use cloud-to-device messaging to update properties or invoke device management actions. Cloud-to-device communication also enables you to send cloud intelligence to your edge devices with Azure IoT Edge. The unique device-level identity provided by IoT Hub helps better secure your IoT solution from potential attacks.
-[Azure Event Hubs](../event-hubs/event-hubs-about.md) is the big data streaming service of Azure. It is designed for high throughput data streaming scenarios where customers may send billions of requests per day. Event Hubs uses a partitioned consumer model to scale out your stream and is integrated into the big data and analytics services of Azure including Databricks, Stream Analytics, ADLS, and HDInsight. With features like Event Hubs Capture and Auto-Inflate, this service is designed to support your big data apps and solutions. Additionally, IoT Hub uses Event Hubs for its telemetry flow path, so your IoT solution also benefits from the tremendous power of Event Hubs.
+[Azure Event Hubs](../event-hubs/event-hubs-about.md) is the big data streaming service of Azure. It's designed for high throughput data streaming scenarios where customers may send billions of requests per day, and uses a partitioned consumer model to scale out your stream. Event Hubs is integrated into the big data and analytics services of Azure, including Databricks, Stream Analytics, ADLS, and HDInsight. With features like Event Hubs Capture and Auto-Inflate, this service is designed to support your big data apps and solutions. Additionally, IoT Hub uses Event Hubs for its telemetry flow path, so your IoT solution also benefits from the tremendous power of Event Hubs.
-To summarize, both solutions are designed for data ingestion at a massive scale. Only IoT Hub provides the rich IoT-specific capabilities that are designed for you to maximize the business value of connecting your IoT devices to the Azure cloud. If your IoT journey is just beginning, starting with IoT Hub to support your data ingestion scenarios will assure that you have instant access to the full-featured IoT capabilities once your business and technical needs require them.
+To summarize, both solutions are designed for data ingestion at a massive scale. Only IoT Hub provides the rich IoT-specific capabilities that are designed for you to maximize the business value of connecting your IoT devices to the Azure cloud. If your IoT journey is just beginning, starting with IoT Hub to support your data ingestion scenarios assures that you'll have instant access to full-featured IoT capabilities, once your business and technical needs require them.
-The following table provides details about how the two tiers of IoT Hub compare to Event Hubs when you're evaluating them for IoT capabilities. For more information about the standard and basic tiers of IoT Hub, see [How to choose the right IoT Hub tier](iot-hub-scaling.md).
+The following table provides details about how the two tiers of IoT Hub compare to Event Hubs when you're evaluating them for IoT capabilities. For more information about the standard and basic tiers of IoT Hub, see [Choose the right IoT Hub tier for your solution](iot-hub-scaling.md).
-| IoT Capability | IoT Hub standard tier | IoT Hub basic tier | Event Hubs |
+| IoT capability | IoT Hub standard tier | IoT Hub basic tier | Event Hubs |
| | | | | | Device-to-cloud messaging | ![Check][checkmark] | ![Check][checkmark] | ![Check][checkmark] |
-| Protocols: HTTPS, AMQP, AMQP over webSockets | ![Check][checkmark] | ![Check][checkmark] | ![Check][checkmark] |
-| Protocols: MQTT, MQTT over webSockets | ![Check][checkmark] | ![Check][checkmark] | |
+| Protocols: HTTPS, AMQP, AMQP over WebSockets | ![Check][checkmark] | ![Check][checkmark] | ![Check][checkmark] |
+| Protocols: MQTT, MQTT over WebSockets | ![Check][checkmark] | ![Check][checkmark] | |
| Per-device identity | ![Check][checkmark] | ![Check][checkmark] | | | File upload from devices | ![Check][checkmark] | ![Check][checkmark] | | | Device Provisioning Service | ![Check][checkmark] | ![Check][checkmark] | |
iot-hub Iot Hub Devguide Messages D2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-d2c.md
Previously updated : 05/14/2021 Last updated : 11/21/2022
Message routing enables you to send messages from your devices to cloud services in an automated, scalable, and reliable manner. Message routing can be used for:
-* **Sending device telemetry messages as well as events** namely, device lifecycle events, device twin change events, digital twin change events, and device connection state events to the built-in-endpoint and custom endpoints. Learn about [routing endpoints](#routing-endpoints). To learn more about the events sent from IoT Plug and Play devices, see [Understand IoT Plug and Play digital twins](../iot-develop/concepts-digital-twin.md).
+* **Sending device telemetry messages as well as events**, namely device lifecycle events, device twin change events, digital twin change events, and device connection state events, to the built-in endpoint and custom endpoints. Learn about [routing endpoints](#routing-endpoints). To learn more about the events sent from IoT Plug and Play devices, see [Understand IoT Plug and Play digital twins](../iot-develop/concepts-digital-twin.md).
* **Filtering data before routing it to various endpoints** by applying rich queries. Message routing allows you to query on the message properties and message body as well as device twin tags and device twin properties. Learn more about using [queries in message routing](iot-hub-devguide-routing-query-syntax.md).
-IoT Hub needs write access to these service endpoints for message routing to work. If you configure your endpoints through the Azure portal, the necessary permissions are added for you. Make sure you configure your services to support the expected throughput. For example, if you are using Event Hubs as a custom endpoint, you must configure the **throughput units** for that event hub so it can handle the ingress of events you plan to send via IoT Hub message routing. Similarly, when using a Service Bus Queue as an endpoint, you must configure the **maximum size** to ensure the queue can hold all the data ingressed, until it is egressed by consumers. When you first configure your IoT solution, you may need to monitor your other endpoints and make any necessary adjustments for the actual load.
+IoT Hub needs write access to these service endpoints for message routing to work. If you configure your endpoints through the Azure portal, the necessary permissions are added for you. Make sure you configure your services to support the expected throughput. For example, if you're using Event Hubs as a custom endpoint, you must configure the **throughput units** for that event hub so it can handle the ingress of events you plan to send via IoT Hub message routing. Similarly, when using a Service Bus Queue as an endpoint, you must configure the **maximum size** to ensure the queue can hold all the data ingressed, until it's egressed by consumers. When you first configure your IoT solution, you may need to monitor your other endpoints and make any necessary adjustments for the actual load.
IoT Hub defines a [common format](iot-hub-devguide-messages-construct.md) for all device-to-cloud messaging for interoperability across protocols. If a message matches multiple routes that point to the same endpoint, IoT Hub delivers the message to that endpoint only once. Therefore, you don't need to configure deduplication on your Service Bus queue or topic. Use this tutorial to learn how to [configure message routing](tutorial-routing.md).
## Routing endpoints
-An IoT hub has a default built-in-endpoint (**messages/events**) that is compatible with Event Hubs. You can create [custom endpoints](iot-hub-devguide-endpoints.md#custom-endpoints) to route messages to by linking other services in your subscription to the IoT Hub.
+An IoT hub has a default built-in endpoint (**messages/events**) that is compatible with Event Hubs. You can create [custom endpoints](iot-hub-devguide-endpoints.md#custom-endpoints) to route messages to by linking other services in your subscription to the IoT hub.
Each message is routed to all endpoints whose routing queries it matches. In other words, a message can be routed to multiple endpoints.
IoT Hub currently supports the following endpoints:
## Built-in endpoint as a routing endpoint
-You can use standard [Event Hubs integration and SDKs](iot-hub-devguide-messages-read-builtin.md) to receive device-to-cloud messages from the built-in endpoint (**messages/events**). Once a Route is created, data stops flowing to the built-in-endpoint unless a Route is created to that endpoint. Even if no routes are created, a fallback route must be enabled to route messages to the built-in endpoint. The fallback is enabled by default if you create your hub using the portal or the CLI.
+You can use standard [Event Hubs integration and SDKs](iot-hub-devguide-messages-read-builtin.md) to receive device-to-cloud messages from the built-in endpoint (**messages/events**). Once a route is created, data stops flowing to the built-in endpoint unless a route is created to that endpoint. Even if no routes are created, a fallback route must be enabled to route messages to the built-in endpoint. The fallback is enabled by default if you create your hub using the portal or the CLI.
## Azure Storage as a routing endpoint
There are two storage services IoT Hub can route messages to: [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md) and [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) (ADLS Gen2) accounts. Azure Data Lake Storage accounts are [hierarchical namespace](../storage/blobs/data-lake-storage-namespace.md)-enabled storage accounts built on top of blob storage. Both of these use blobs for their storage.
-IoT Hub supports writing data to Azure Storage in the [Apache Avro](https://avro.apache.org/) format and the JSON format. The default is AVRO. When using JSON encoding, you must set the contentType to **application/json** and contentEncoding to **UTF-8** in the message [system properties](iot-hub-devguide-routing-query-syntax.md#system-properties). Both of these values are case-insensitive. If the content encoding is not set, then IoT Hub will write the messages in base 64 encoded format.
+IoT Hub supports writing data to Azure Storage in the [Apache Avro](https://avro.apache.org/) format and the JSON format. The default is AVRO. When using JSON encoding, you must set the contentType property to **application/json** and contentEncoding property to **UTF-8** in the message [system properties](iot-hub-devguide-routing-query-syntax.md#system-properties). Both of these values are case-insensitive. If the content encoding isn't set, then IoT Hub will write the messages in base 64 encoded format.
-The encoding format can be only set when the blob storage endpoint is configured; it can't be edited for an existing endpoint. To switch encoding formats for an existing endpoint, you'll need to delete the endpoint and re-create it with the format you want. One helpful strategy might be to create a new custom endpoint with your desired encoding format and add a parallel route to that endpoint. In this way, you can verify your data before deleting the existing endpoint.
+The encoding format can be only set when the blob storage endpoint is configured; it can't be edited for an existing endpoint. To switch encoding formats for an existing endpoint, you'll need to first delete the endpoint, and then re-create it with the format you want. One helpful strategy might be to create a new custom endpoint with your desired encoding format and add a parallel route to that endpoint. In this way, you can verify your data before deleting the existing endpoint.
You can select the encoding format using the IoT Hub Create or Update REST API, specifically the [RoutingStorageContainerProperties](/rest/api/iothub/iothubresource/createorupdate#routingstoragecontainerproperties), the [Azure portal](https://portal.azure.com), [Azure CLI](/cli/azure/iot/hub/routing-endpoint), or [Azure PowerShell](/powershell/module/az.iothub/add-aziothubroutingendpoint). The following image shows how to select the encoding format in the Azure portal.
-![Blob storage endpoint encoding](./media/iot-hub-devguide-messages-d2c/blobencoding.png)
IoT Hub batches messages and writes data to storage whenever the batch reaches a certain size or a certain amount of time has elapsed. IoT Hub defaults to the following file naming convention:
IoT Hub batches messages and writes data to storage whenever the batch reaches a
```
{iothub}/{partition}/{YYYY}/{MM}/{DD}/{HH}/{mm}
```
-You may use any file naming convention, however you must use all listed tokens. IoT Hub will write to an empty blob if there is no data to write.
+You may use any file naming convention, however you must use all listed tokens. IoT Hub will write to an empty blob if there's no data to write.
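For example, with the default convention, a blob written by a hub named `contoso-hub` from partition `0` at 14:30 UTC on 22 November 2022 would be named along these lines (illustrative values only):
```
contoso-hub/0/2022/11/22/14/30
```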
We recommend listing the blobs or files and then iterating over them, to ensure all blobs or files are read without making any assumptions of partition. The partition range could potentially change during a [Microsoft-initiated failover](iot-hub-ha-dr.md#microsoft-initiated-failover) or IoT Hub [manual failover](iot-hub-ha-dr.md#manual-failover). You can use the [List Blobs API](/rest/api/storageservices/list-blobs) to enumerate the list of blobs or [List ADLS Gen2 API](/rest/api/storageservices/datalakestoragegen2/path) for the list of files. See the following sample as guidance.
public void ListBlobsInContainer(string containerName, string iothub)
} ```
-To create an Azure Data Lake Gen2-compatible storage account, create a new V2 storage account and select *enabled* on the *Hierarchical namespace* field on the **Advanced** tab as shown in the following image:
+To create an Azure Data Lake Gen2-compatible storage account, create a new V2 storage account and select **Enable hierarchical namespace** from the **Data Lake Storage Gen2** section of the **Advanced** tab, as shown in the following image:
-![Select Azure Date Lake Gen2 storage](./media/iot-hub-devguide-messages-d2c/selectadls2storage.png)
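If you prefer to script the storage account creation, a minimal sketch with Azure PowerShell (the resource group, account name, and location are placeholder values):
```PowerShell
# Creates a general-purpose v2 storage account with the hierarchical namespace
# enabled, which makes it Data Lake Storage Gen2 capable.
New-AzStorageAccount `
    -ResourceGroupName "rg-iot-routing" `
    -Name "contosoroutingadls" `
    -Location "eastus2" `
    -SkuName "Standard_LRS" `
    -Kind "StorageV2" `
    -EnableHierarchicalNamespace $true
```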
## Service Bus Queues and Service Bus Topics as a routing endpoint
Service Bus queues and topics used as IoT Hub endpoints must not have **Sessions
Apart from the built-in Event Hubs-compatible endpoint, you can also route data to custom endpoints of type Event Hubs.
## Azure Cosmos DB as a routing endpoint (preview)
-You can send data directly to Azure Cosmos DB from IoT Hub. Cosmos DB is a fully managed hyperscale multi-model database service. It provides very low latency and high availability, making it a great choice for scenarios like connected solutions and manufacturing which require extensive downstream data analysis.
-IoT Hub supports writing to Cosmos DB in JSON (if specified in the message content-type) or as Base64 encoded binary. In order to set up a route to Cosmos DB, you will have to do the following:
+You can send data directly to Azure Cosmos DB from IoT Hub. Cosmos DB is a fully managed hyperscale multi-model database service. It provides low latency and high availability, making it a great choice for scenarios like connected solutions and manufacturing that require extensive downstream data analysis.
-From your provisioned IoT Hub, go to the Hub settings and click on message routing. Go to the Custom endpoints tab, click on Add and select Cosmos DB. The following image shows the endpoint addition:
+IoT Hub supports writing to Cosmos DB in JSON (if specified in the message content-type) or as Base64 encoded binary. You can set up a Cosmos DB endpoint for message routing by performing the following steps in the Azure portal:
-![Screenshot that shows how to add a Cosmos DB endpoint.](media/iot-hub-devguide-messages-d2c/add-cosmos-db-endpoint.png)
+1. Navigate to your provisioned IoT hub.
+1. In the resource menu, select **Message routing** from **Hub settings**.
+1. Select the **Custom endpoints** tab in the working pane, then select **Add** and choose **Cosmos DB (preview)** from the dropdown list.
-Enter your endpoint name. You should be able to choose from a list of Cosmos DB accounts available for selection, along with the Database and collection.
+ The following image shows the endpoint addition options in the working pane of Azure portal:
-As Cosmos DB is a hyperscale datastore, all data/documents written to it must contain a field that represents a logical partition. The partition key property name is defined at the Container level and cannot be changed once it has been set. Each logical partition has a maximum size of 20GB. To effectively support high-scale scenarios, you can enable [Synthetic Partition Keys](/azure/cosmos-db/nosql/synthetic-partition-keys) for the Cosmos DB endpoint and configure them based on your estimated data volume. For example, in manufacturing scenarios, your logical partition might be expected to approach its max limit of 20 GB within a month. In that case, you can define a Synthetic Partition Key which is a combination of the device id and the month. This key will be automatically added to the partition key field for each new Cosmos DB record, ensuring logical partitions are created each month for each device.
+ :::image type="content" alt-text="Screenshot that shows how to add a Cosmos DB endpoint." source="media/iot-hub-devguide-messages-d2c/add-cosmos-db-endpoint.png":::
+
+1. Type a name for your Cosmos DB endpoint in **Endpoint name**.
+1. In **Cosmos DB account**, choose an existing Cosmos DB account from a list of Cosmos DB accounts available for selection, then select an existing database and collection in **Database** and **Collection**, respectively.
+1. In **Generate a synthetic partition key for messages**, select **Enable** if needed.
+
+ To effectively support high-scale scenarios, you can enable [synthetic partition keys](/azure/cosmos-db/nosql/synthetic-partition-keys) for the Cosmos DB endpoint. As Cosmos DB is a hyperscale data store, all data/documents written to it must contain a field that represents a logical partition. Each logical partition has a maximum size of 20 GB. You can specify the partition key property name in **Partition key name**. The partition key property name is defined at the container level and can't be changed once it has been set.
+
+ You can configure the synthetic partition key value by specifying a template in **Partition key template** based on your estimated data volume. For example, in manufacturing scenarios, your logical partition might be expected to approach its maximum limit of 20 GB within a month. In that case, you can define a synthetic partition key as a combination of the device ID and the month. The generated partition key value is automatically added to the partition key property for each new Cosmos DB record, ensuring logical partitions are created each month for each device.
- You can choose any of the supported authentication types for accessing the database, based on your system setup.
+1. In **Authentication type**, choose an authentication type for your Cosmos DB endpoint. You can choose any of the supported authentication types for accessing the database, based on your system setup.
-> [!Caution]
-> If you are using the System managed identity for authenticating to CosmosDB, you will need to have a "Cosmos DB Built in Data Contributor" Role assigned via CLI. The role setup is not supported from the portal today. For more details on the various roles, see [Configure role-based access for Azure Cosmos DB](/azure/cosmos-db/how-to-setup-rbac). To understand assigning roles via CLI, see [Manage Azure Cosmos DB SQL role resources.](/cli/azure/cosmosdb/sql/role)
+ > [!CAUTION]
+ > If you're using the system-assigned managed identity for authenticating to Cosmos DB, you must use Azure CLI or Azure PowerShell to assign the Cosmos DB Built-in Data Contributor role definition to the identity, as sketched after these steps. Role assignment for Cosmos DB isn't currently supported from the Azure portal. For more information about the various roles, see [Configure role-based access for Azure Cosmos DB](/azure/cosmos-db/how-to-setup-rbac). To understand assigning roles via CLI, see [Manage Azure Cosmos DB SQL role resources](/cli/azure/cosmosdb/sql/role).
-Once you have selected all the details, click on create and complete the setup of the custom endpoint.
+1. Select **Create** to complete the creation of your custom endpoint.
+
+To learn more about using the Azure portal to create message routes and endpoints for your IoT hub, see [Message routing with IoT Hub - Azure portal](how-to-routing-portal.md).
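Related to the caution above, a hedged sketch of assigning the Cosmos DB Built-in Data Contributor role to the hub's system-assigned managed identity with Azure PowerShell (names and IDs are placeholders; `00000000-0000-0000-0000-000000000002` is commonly the built-in Data Contributor role definition ID, but verify it in your environment):
```PowerShell
# Assign the Cosmos DB Built-in Data Contributor role to the IoT hub's
# system-assigned managed identity, scoped to the whole Cosmos DB account ("/").
New-AzCosmosDBSqlRoleAssignment `
    -ResourceGroupName "rg-iot-routing" `
    -AccountName "contoso-cosmos" `
    -RoleDefinitionId "00000000-0000-0000-0000-000000000002" `
    -Scope "/" `
    -PrincipalId "<object ID of the IoT hub managed identity>"
```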
## Reading data that has been routed
You can configure a route by following this [tutorial](tutorial-routing.md).
Use the following tutorials to learn how to read messages from an endpoint.
-* Reading from [Built-in-endpoint](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs)
+* Reading from a [built-in endpoint](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs)
* Reading from [Blob storage](../storage/blobs/storage-blob-event-quickstart.md)
Use the following tutorials to learn how to read messages from an endpoint.
## Fallback route
-The fallback route sends all the messages that don't satisfy query conditions on any of the existing routes to the built-in-Event Hubs (**messages/events**), that is compatible with [Event Hubs](../event-hubs/index.yml). If message routing is turned on, you can enable the fallback route capability. Once a route is created, data stops flowing to the built-in-endpoint, unless a route is created to that endpoint. If there are no routes to the built-in-endpoint and a fallback route is enabled, only messages that don't match any query conditions on routes will be sent to the built-in-endpoint. Also, if all existing routes are deleted, fallback route must be enabled to receive all data at the built-in-endpoint.
+The fallback route sends all the messages that don't satisfy query conditions on any of the existing routes to the built-in endpoint (**messages/events**), which is compatible with [Event Hubs](../event-hubs/index.yml). If message routing is enabled, you can enable the fallback route capability. Once a route is created, data stops flowing to the built-in endpoint, unless a route is created to that endpoint. If there are no routes to the built-in endpoint and a fallback route is enabled, only messages that don't match any query conditions on routes will be sent to the built-in endpoint. Also, if all existing routes are deleted, fallback route capability must be enabled to receive all data at the built-in endpoint.
-You can enable/disable the fallback route in the Azure portal->Message Routing blade. You can also use Azure Resource Manager for [FallbackRouteProperties](/rest/api/iothub/iothubresource/createorupdate#fallbackrouteproperties) to use a custom endpoint for fallback route.
+You can enable or disable the fallback route in the Azure portal, from the **Message routing** blade. You can also use Azure Resource Manager for [FallbackRouteProperties](/rest/api/iothub/iothubresource/createorupdate#fallbackrouteproperties) to use a custom endpoint for the fallback route.
## Non-telemetry events
-In addition to device telemetry, message routing also enables sending device twin change events, device lifecycle events, digital twin change events, and device connection state events. For example, if a route is created with data source set to **device twin change events**, IoT Hub sends messages to the endpoint that contain the change in the device twin. Similarly, if a route is created with data source set to **device lifecycle events**, IoT Hub sends a message indicating whether the device or module was deleted or created. For more information about device lifecycle events, see [Device and module lifecycle notifications](./iot-hub-devguide-identity-registry.md#device-and-module-lifecycle-notifications). When using [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md), a developer can create routes with data source set to **digital twin change events** and IoT Hub sends messages whenever a digital twin property is set or changed, a digital twin is replaced, or when a change event happens for the underlying device twin. Finally, if a route is created with data source set to **device connection state events**, IoT Hub sends a message indicating whether the device was connected or disconnected.
+In addition to device telemetry, message routing also enables sending non-telemetry events, including:
+
+* Device twin change events
+* Device lifecycle events
+* Device job lifecycle events
+* Digital twin change events
+* Device connection state events
+* MQTT broker messages
+
+For example, if a route is created with the data source set to **Device Twin Change Events**, IoT Hub sends messages to the endpoint that contain the change in the device twin. Similarly, if a route is created with the data source set to **Device Lifecycle Events**, IoT Hub sends a message indicating whether the device or module was deleted or created. For more information about device lifecycle events, see [Device and module lifecycle notifications](./iot-hub-devguide-identity-registry.md#device-and-module-lifecycle-notifications). When using [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md), a developer can create routes with the data source set to **Digital Twin Change Events** and IoT Hub sends messages whenever a digital twin property is set or changed, a digital twin is replaced, or when a change event happens for the underlying device twin. Finally, if a route is created with data source set to **Device Connection State Events**, IoT Hub sends a message indicating whether the device was connected or disconnected.
[IoT Hub also integrates with Azure Event Grid](iot-hub-event-grid.md) to publish device events to support real-time integrations and automation of workflows based on these events. See key [differences between message routing and Event Grid](iot-hub-event-grid-routing-comparison.md) to learn which works best for your scenario.
## Limitations for device connection state events
-Device connection state events are available for devices connecting using either the MQTT or AMQP protocol, or using either of these protocols over WebSockets. Requests made only with HTTPS won't trigger device connection state notifications. For IoT Hub to start sending device connection state events, after opening a connection a device must call either the *cloud-to-device receive message* operation or the *device-to-cloud send telemetry* operation. Outside of the Azure IoT SDKs, in MQTT these operations equate to SUBSCRIBE or PUBLISH operations on the appropriate messaging [topics](iot-hub-mqtt-support.md). Over AMQP these equate to attaching or transferring a message on the [appropriate link paths](iot-hub-amqp-support.md).
+Device connection state events are available for devices connecting using either the MQTT or AMQP protocol, or using either of these protocols over WebSockets. Requests made only with HTTPS won't trigger device connection state notifications. For IoT Hub to start sending device connection state events, after opening a connection a device must call either the *cloud-to-device receive message* operation or the *device-to-cloud send telemetry* operation. Outside of the Azure IoT SDKs, in MQTT these operations equate to SUBSCRIBE or PUBLISH operations on the appropriate messaging [topics](iot-hub-mqtt-support.md). Over AMQP these operations equate to attaching or transferring a message on the [appropriate link paths](iot-hub-amqp-support.md).
-IoT Hub does not report each individual device connect and disconnect, but rather publishes the current connection state taken at a periodic 60 second snapshot. Receiving either the same connection state event with different sequence numbers or different connection state events both mean that there was a change in the device connection state during the 60 second window.
+IoT Hub doesn't report each individual device connect and disconnect, but rather publishes the current connection state taken at a periodic, 60-second snapshot. Receiving either the same connection state event with different sequence numbers or different connection state events both mean that there was a change in the device connection state during the 60-second window.
## Testing routes
-When you create a new route or edit an existing route, you should test the route query with a sample message. You can test individual routes or test all routes at once and no messages are routed to the endpoints during the test. Azure portal, Azure Resource Manager, Azure PowerShell, and Azure CLI can be used for testing. Outcomes help identify whether the sample message matched the query, message did not match the query, or test couldn't run because the sample message or query syntax are incorrect. To learn more, see [Test Route](/rest/api/iothub/iothubresource/testroute) and [Test all routes](/rest/api/iothub/iothubresource/testallroutes).
+When you create a new route or edit an existing route, you should test the route query with a sample message. You can test individual routes or test all routes at once and no messages are routed to the endpoints during the test. Azure portal, Azure Resource Manager, Azure PowerShell, and Azure CLI can be used for testing. Outcomes help identify whether the sample message matched or didn't match the query, or if the test couldn't run because the sample message or query syntax are incorrect. To learn more, see [Test Route](/rest/api/iothub/iothubresource/testroute) and [Test All Routes](/rest/api/iothub/iothubresource/testallroutes).
## Latency
-When you route device-to-cloud telemetry messages using built-in endpoints, there is a slight increase in the end-to-end latency after the creation of the first route.
+When you route device-to-cloud telemetry messages using built-in endpoints, there's a slight increase in the end-to-end latency after the creation of the first route.
-In most cases, the average increase in latency is less than 500 ms. However, the latency you experience can vary and can be higher depending on the tier of your IoT hub and your solution architecture. You can monitor the latency using **Routing: message latency for messages/events** or **d2c.endpoints.latency.builtIn.events** IoT Hub metric. Creating or deleting any route after the first one does not impact the end-to-end latency.
+In most cases, the average increase in latency is less than 500 milliseconds. However, the latency you experience can vary and can be higher depending on the tier of your IoT hub and your solution architecture. You can monitor the latency using the **Routing: message latency for messages/events** or **d2c.endpoints.latency.builtIn.events** IoT Hub metrics. Creating or deleting any route after the first one doesn't impact the end-to-end latency.
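One way to pull this metric from the command line is a sketch like the following, which queries the built-in endpoint latency metric over one-hour intervals. The hub and resource group names are placeholders; the metric name is the one listed above.

```azurecli
# Query the built-in endpoint latency metric for an IoT hub (names are placeholders).
hubid=$(az iot hub show --name contoso-hub --resource-group contoso-rg --query id --output tsv)

az monitor metrics list \
  --resource $hubid \
  --metric "d2c.endpoints.latency.builtIn.events" \
  --interval PT1H \
  --aggregation Average
```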
## Monitoring and troubleshooting
-IoT Hub provides several metrics related to routing and endpoints to give you an overview of the health of your hub and messages sent. For a list of all of the IoT Hub metrics broken out by functional category, see [Metrics in the Monitoring data reference](monitor-iot-hub-reference.md#metrics). You can track errors that occur during evaluation of a routing query and endpoint health as perceived by IoT Hub with the [**routes** category in IoT Hub resource logs](monitor-iot-hub-reference.md#routes). To learn more about using metrics and resource logs with IoT Hub, see [Monitor IoT Hub](monitor-iot-hub.md).
+IoT Hub provides several metrics related to routing and endpoints to give you an overview of the health of your hub and messages sent. For a list of all of the IoT Hub metrics broken out by functional category, see the [Metrics](monitor-iot-hub-reference.md#metrics) section of [Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md). You can track errors that occur during evaluation of a routing query and endpoint health as perceived by IoT Hub with the [**routes** category in IoT Hub resource logs](monitor-iot-hub-reference.md#routes). To learn more about using metrics and resource logs with IoT Hub, see [Monitoring Azure IoT Hub](monitor-iot-hub.md).
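To collect those resource logs, one option is a diagnostic setting that sends the **routes** category to a Log Analytics workspace. The following is a minimal sketch; the setting name, IoT hub resource ID, and workspace resource ID are placeholders.

```azurecli
# Send the IoT Hub "Routes" resource log category to a Log Analytics workspace (IDs are placeholders).
az monitor diagnostic-settings create \
  --name routing-logs \
  --resource <iot-hub-resource-id> \
  --workspace <log-analytics-workspace-resource-id> \
  --logs '[{"category": "Routes", "enabled": true}]'
```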
-You can use the REST API [Get Endpoint Health](/rest/api/iothub/iothubresource/getendpointhealth#iothubresource_getendpointhealth) to get [health status](iot-hub-devguide-endpoints.md#custom-endpoints) of the endpoints.
+You can use the REST API [Get Endpoint Health](/rest/api/iothub/iothubresource/getendpointhealth#iothubresource_getendpointhealth) to get the [health status](iot-hub-devguide-endpoints.md#custom-endpoints) of the endpoints.
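The same operation can be called ad hoc with `az rest`. In the following sketch, the subscription ID, resource group, hub name, and API version are placeholders or assumptions; check the REST reference above for the current API version.

```azurecli
# Call the getEndpointHealth operation directly (IDs, names, and API version are placeholders/assumptions).
az rest --method get \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers/Microsoft.Devices/IotHubs/contoso-hub/routingEndpointsHealth?api-version=2021-07-02"
```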
For more information and support for troubleshooting routing, see the [troubleshooting guide for routing](troubleshoot-message-routing.md). ## Next steps
-* To learn how to create Message Routes, see [Process IoT Hub device-to-cloud messages using routes](tutorial-routing.md).
+* To learn how to create message routes, see [Process IoT Hub device-to-cloud messages using routes](tutorial-routing.md).
* [How to send device-to-cloud messages](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs)
iot-hub Iot Hub Devguide Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-protocols.md
Title: Azure IoT Hub communication protocols and ports | Microsoft Docs
-description: This article describes the supported communication protocols for device-to-cloud and cloud-to-device communications and the port numbers that must be open.
+description: This article describes the supported communication protocols for device-to-cloud and cloud-to-device communications and the port numbers that must be open for those protocols.
- Previously updated : 01/29/2018 Last updated : 11/21/2022
IoT Hub allows devices to use the following protocols for device-side communicat
* [MQTT](https://docs.oasis-open.org/mqtt/mqtt/v3.1.1/mqtt-v3.1.1.pdf) * MQTT over WebSockets
-* [AMQP](https://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-complete-v1.0-os.pdf)
+* [Advanced Message Queuing Protocol (AMQP)](https://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-complete-v1.0-os.pdf)
* AMQP over WebSockets * HTTPS
The following table provides the high-level recommendations for your choice of p
| Protocol | When you should choose this protocol | | | |
-| MQTT <br> MQTT over WebSocket |Use on all devices that do not require to connect multiple devices (each with its own per-device credentials) over the same TLS connection. |
-| AMQP <br> AMQP over WebSocket |Use on field and cloud gateways to take advantage of connection multiplexing across devices. |
-| HTTPS |Use for devices that cannot support other protocols. |
+| MQTT <br> MQTT over WebSockets | Use on all devices that don't need to connect multiple devices (each with its own per-device credentials) over the same TLS connection. |
+| AMQP <br> AMQP over WebSockets | Use on field and cloud gateways to take advantage of connection multiplexing across devices. |
+| HTTPS | Use for devices that can't support other protocols. |
Consider the following points when you choose your protocol for device-side communications:
-* **Cloud-to-device pattern**. HTTPS does not have an efficient way to implement server push. As such, when you are using HTTPS, devices poll IoT Hub for cloud-to-device messages. This approach is inefficient for both the device and IoT Hub. Under current HTTPS guidelines, each device should poll for messages every 25 minutes or more. Issuing more HTTPS receives results in IoT Hub throttling the requests. MQTT and AMQP support server push when receiving cloud-to-device messages. They enable immediate pushes of messages from IoT Hub to the device. If delivery latency is a concern, MQTT or AMQP are the best protocols to use. For rarely connected devices, HTTPS works as well.
+* **Cloud-to-device pattern**. HTTPS doesn't have an efficient way to implement server push. As such, when you're using HTTPS, devices poll IoT Hub for cloud-to-device messages. This approach is inefficient for both the device and IoT Hub. Under current HTTPS guidelines, each device should poll for messages every 25 minutes or more. Issuing HTTPS receive requests more often results in IoT Hub throttling the requests. MQTT and AMQP support server push when receiving cloud-to-device messages. They enable immediate pushes of messages from IoT Hub to the device. If delivery latency is a concern, MQTT or AMQP are the best protocols to use. For rarely connected devices, HTTPS works as well.
-* **Field gateways**. MQTT and HTTPS support only a single device identity (device ID plus credentials) per TLS connection. For this reason, these protocols are not supported for [field gateway scenarios](iot-hub-devguide-endpoints.md#field-gateways) that require multiplexing messages using multiple device identities across a single or a pool of upstream connections to IoT Hub. Such gateways can use a protocol that supports multiple device identities per connection, like AMQP, for their upstream traffic.
+* **Field gateways**. MQTT and HTTPS support only a single device identity (device ID plus credentials) per TLS connection. For this reason, these protocols aren't supported for [field gateway scenarios](iot-hub-devguide-endpoints.md#field-gateways) that require multiplexing messages, using multiple device identities, across either a single connection or a pool of upstream connections to IoT Hub. Such gateways can use a protocol that supports multiple device identities per connection, like AMQP, for their upstream traffic.
-* **Low resource devices**. The MQTT and HTTPS libraries have a smaller footprint than the AMQP libraries. As such, if the device has limited resources (for example, less than 1-MB RAM), these protocols might be the only protocol implementation available.
+* **Low resource devices**. The MQTT and HTTPS libraries have a smaller footprint than the AMQP libraries. As such, if the device has limited resources (for example, less than 1 MB of RAM), these protocols might be the only protocol implementation available.
* **Network traversal**. The standard AMQP protocol uses port 5671, and MQTT listens on port 8883. Use of these ports could cause problems in networks that are closed to non-HTTPS protocols. Use MQTT over WebSockets, AMQP over WebSockets, or HTTPS in this scenario.
Devices can communicate with IoT Hub in Azure using various protocols. Typically
| Protocol | Port | | | |
-| MQTT |8883 |
-| MQTT over WebSockets |443 |
-| AMQP |5671 |
-| AMQP over WebSockets |443 |
-| HTTPS |443 |
+| MQTT | 8883 |
+| MQTT over WebSockets | 443 |
+| AMQP | 5671 |
+| AMQP over WebSockets | 443 |
+| HTTPS | 443 |
-The IP address of an IoT hub is subject to change without notice. To learn how to mitigate the effects of IoT hub IP address changes on your IoT solution and devices, see [IoT Hub IP address best practices](iot-hub-understand-ip-address.md#best-practices).
+The IP address of an IoT hub is subject to change without notice. To learn how to mitigate the effects of IoT hub IP address changes on your IoT solution and devices, see the [Best practices](iot-hub-understand-ip-address.md#best-practices) section of [IoT Hub IP addresses](iot-hub-understand-ip-address.md).
## Next steps
-To learn more about how IoT Hub implements the MQTT protocol, see [Communicate with your IoT hub using the MQTT protocol](iot-hub-mqtt-support.md).
+For more information about how IoT Hub implements the MQTT protocol, see [Communicate with your IoT hub using the MQTT protocol](iot-hub-mqtt-support.md).
iot-hub Iot Hub Devguide Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-sdks.md
Title: Azure IoT Hub SDKs | Microsoft Docs
-description: Links to the Azure IoT Hub SDKs which you can use to build device apps and back-end apps.
+description: Links to the Azure IoT Hub SDKs that you can use to build device apps and back-end apps.
Previously updated : 06/01/2021 Last updated : 11/18/2022
There are three categories of software development kits (SDKs) for working with IoT Hub:
-* [**IoT Hub device SDKs**](#azure-iot-hub-device-sdks) enable you to build apps that run on your IoT devices using device client or module client. These apps send telemetry to your IoT hub, and optionally receive messages, job, method, or twin updates from your IoT hub. You can use these SDKs to build device apps that use [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) conventions and models to advertise their capabilities to IoT Plug and Play-enabled applications. You can also use module client to author [modules](../iot-edge/iot-edge-modules.md) for [Azure IoT Edge runtime](../iot-edge/about-iot-edge.md).
+* [**IoT Hub device SDKs**](#azure-iot-hub-device-sdks) enable you to build apps that run on your IoT devices using the device client or module client. These apps send telemetry to your IoT hub, and optionally receive messages, jobs, methods, or twin updates from your IoT hub. You can use these SDKs to build device apps that use [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) conventions and models to advertise their capabilities to IoT Plug and Play-enabled applications. You can also use the module client to author [modules](../iot-edge/iot-edge-modules.md) for [Azure IoT Edge runtime](../iot-edge/about-iot-edge.md).
* [**IoT Hub service SDKs**](#azure-iot-hub-service-sdks) enable you to build backend applications to manage your IoT hub, and optionally send messages, schedule jobs, invoke direct methods, or send desired property updates to your IoT devices or modules.
The SDKs are available in **multiple languages** providing the flexibility to ch
| **C** | [packages](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#getting-the-sdk) | [GitHub](https://github.com/Azure/azure-iot-sdk-c) | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c) | [Samples](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples) | [Reference](https://github.com/Azure/azure-iot-sdk-c/) | > [!WARNING]
-> The **C SDK** listed above is **not** suitable for embedded applications due to its memory management and threading model. For embedded devices, refer to the [Embedded device SDKs](#embedded-device-sdks).
+> The **C device SDK** listed in the previous table is **not** suitable for embedded applications due to its memory management and threading model. For embedded devices, refer to the [Embedded device SDKs](#embedded-device-sdks).
### Embedded device SDKs
-These SDKs were designed and created to run on devices with limited compute and memory resources and are implemented using the C language.
+These SDKs are designed and created to run on devices with limited compute and memory resources and are implemented using the C language.
The embedded device SDKs are available for **multiple operating systems** providing the flexibility to choose which best suits your team and scenario.
The embedded device SDKs are available for **multiple operating systems** provid
| **FreeRTOS** | FreeRTOS Middleware | [GitHub](https://github.com/Azure/azure-iot-middleware-freertos) | [Samples](https://github.com/Azure-Samples/iot-middleware-freertos-samples) | [Reference](https://azure.github.io/azure-iot-middleware-freertos) | | **Bare Metal** | Azure SDK for Embedded C | [GitHub](https://github.com/Azure/azure-sdk-for-c/tree/master/sdk/docs/iot) | [Samples](https://github.com/Azure/azure-sdk-for-c/blob/master/sdk/samples/iot/README.md) | [Reference](https://azure.github.io/azure-sdk-for-c) |
-Learn more about the IoT Hub device SDKS in the [IoT Device Development Documentation](../iot-develop/about-iot-sdks.md).
+Learn more about the IoT Hub device SDKs in the [IoT device development documentation](../iot-develop/about-iot-sdks.md).
## Azure IoT Hub service SDKs
The Azure IoT service SDKs contain code to facilitate building applications that
## Azure IoT Hub management SDKs
-The Iot Hub management SDKs help you build backend applications that manage the IoT hubs in your Azure subscription.
+The IoT Hub management SDKs help you build backend applications that manage the IoT hubs in your Azure subscription.
| Platform | Package | Code repository | Reference | | --|--|--|--|
The Iot Hub management SDKs help you build backend applications that manage the
## SDK and hardware compatibility
-For more information about device SDK compatibility with specific hardware devices, see the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com/) or individual repository.
+For more information about device SDK compatibility with specific hardware devices, see the [Azure Certified Device catalog](https://devicecatalog.azure.com/) or individual repository.
[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-partial.md)]
Azure IoT SDKs are also available for the following
* [Device Update for IoT Hub SDKs](../iot-hub-device-update/understand-device-update.md): To help you deploy over-the-air (OTA) updates for IoT devices.
-* [IoT Plug and Play SDKs](../iot-develop/libraries-sdks.md): To help you build IoT Plug and Play solutions.
+* [Microsoft SDKs for IoT Plug and Play](../iot-develop/libraries-sdks.md): To help you build IoT Plug and Play solutions.
## Next steps
iot-hub Iot Hub Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-scaling.md
Previously updated : 06/28/2019 Last updated : 11/21/2022 # Choose the right IoT Hub tier for your solution
-Every IoT solution is different, so Azure IoT Hub offers several options based on pricing and scale. This article is meant to help you evaluate your IoT Hub needs. For pricing information about IoT Hub tiers, see [IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub).
+Every IoT solution is different, so Azure IoT Hub offers several options based on pricing and scale. This article is meant to help you evaluate your IoT Hub needs. For pricing information about IoT Hub tiers, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub).
To decide which IoT Hub tier is right for your solution, ask yourself two questions: **What features do I plan to use?**
-Azure IoT Hub offers two tiers, basic and standard, that differ in the number of features they support. If your IoT solution is based around collecting data from devices and analyzing it centrally, then the basic tier is probably right for you. If you want to use more advanced configurations to control IoT devices remotely or distribute some of your workloads onto the devices themselves, then you should consider the standard tier. For a detailed breakdown of which features are included in each tier continue to [Basic and standard tiers](#basic-and-standard-tiers).
+Azure IoT Hub offers two tiers, basic and standard, that differ in the number of features they support. If your IoT solution is based around collecting data from devices and analyzing it centrally, then the basic tier is probably right for you. If you want to use more advanced configurations to control IoT devices remotely or distribute some of your workloads onto the devices themselves, then you should consider the standard tier. For a detailed breakdown of which features are included in each tier, continue to [Basic and standard tiers](#basic-and-standard-tiers).
**How much data do I plan to move daily?**
Each IoT Hub tier is available in three sizes, based around how much data throug
The standard tier of IoT Hub enables all features, and is required for any IoT solutions that want to make use of the bi-directional communication capabilities. The basic tier enables a subset of the features and is intended for IoT solutions that only need uni-directional communication from devices to the cloud. Both tiers offer the same security and authentication features.
-Only one type of [edition](https://azure.microsoft.com/pricing/details/iot-hub/) within a tier can be chosen per IoT Hub. For example, you can create an IoT Hub with multiple units of S1, but not with a mix of units from different editions, such as S1 and S2.
+Only one type of [IoT Hub edition](https://azure.microsoft.com/pricing/details/iot-hub/) within a tier can be chosen per IoT hub. For example, you can create an IoT hub with multiple units of S1. However, you can't create an IoT hub with a mix of units from different editions, such as S1 and B3 or S1 and S2.
-| Capability | Basic tier | Free/Standard tier |
+| Capability | Basic tier | Standard/Free tier |
| - | - | - | | [Device-to-cloud telemetry](iot-hub-devguide-messaging.md) | Yes | Yes | | [Per-device identity](iot-hub-devguide-identity-registry.md) | Yes | Yes |
Only one type of [edition](https://azure.microsoft.com/pricing/details/iot-hub/)
| [Device Provisioning Service](../iot-dps/about-iot-dps.md) | Yes | Yes | | [Monitoring and diagnostics](monitor-iot-hub.md) | Yes | Yes | | [Cloud-to-device messaging](iot-hub-devguide-c2d-guidance.md) | | Yes |
-| [Device twins](iot-hub-devguide-device-twins.md), [Module twins](iot-hub-devguide-module-twins.md), and [Device management](iot-hub-device-management-overview.md) | | Yes |
+| [Device twins](iot-hub-devguide-device-twins.md), [module twins](iot-hub-devguide-module-twins.md), and [device management](iot-hub-device-management-overview.md) | | Yes |
| [Device streams (preview)](iot-hub-device-streams-overview.md) | | Yes | | [Azure IoT Edge](../iot-edge/about-iot-edge.md) | | Yes | | [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) | | Yes |
-IoT Hub also offers a free tier that is meant for testing and evaluation. It has all the capabilities of the standard tier, but limited messaging allowances. You cannot upgrade from the free tier to either basic or standard.
+IoT Hub also offers a free tier that is meant for testing and evaluation. It has all the capabilities of the standard tier, but includes limited messaging allowances. You can't upgrade from the free tier to either the basic or standard tier.
## Partitions
-Azure IoT Hubs contain many core components of [Azure Event Hubs](../event-hubs/event-hubs-features.md), including [Partitions](../event-hubs/event-hubs-features.md#partitions). Event streams for IoT Hubs are generally populated with incoming telemetry data that is reported by various IoT devices. The partitioning of the event stream is used to reduce contentions that occur when concurrently reading and writing to event streams.
+Azure IoT hubs contain many core components from [Azure Event Hubs](../event-hubs/event-hubs-features.md), including [partitions](../event-hubs/event-hubs-features.md#partitions). Event streams for IoT hubs are populated with incoming telemetry data that is reported by various IoT devices. The partitioning of the event stream is used to reduce contentions that occur when concurrently reading and writing to event streams.
-The partition limit is chosen when IoT Hub is created, and cannot be changed. The maximum partition limit for basic tier IoT Hub and standard tier IoT Hub is 32. Most IoT hubs only need 4 partitions. For more information on determining the partitions, see the Event Hubs FAQ [How many partitions do I need?](../event-hubs/event-hubs-faq.yml#how-many-partitions-do-i-need-)
+The partition limit is chosen when an IoT hub is created, and can't be changed. The maximum limit of device-to-cloud partitions for basic tier and standard tier IoT hubs is 32. Most IoT hubs only need four partitions. For more information on determining the partitions, see the [How many partitions do I need?](../event-hubs/event-hubs-faq.yml#how-many-partitions-do-i-need-) question in the [FAQ](../event-hubs/event-hubs-faq.yml) for [Azure Event Hubs](../event-hubs/index.yml).
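For example, the following sketch creates a hub with an explicit partition count and two units of the S1 edition; the hub and resource group names are placeholders.

```azurecli
# The partition count can only be set when the hub is created (names and values are placeholders).
az iot hub create \
  --name contoso-hub \
  --resource-group contoso-rg \
  --sku S1 \
  --unit 2 \
  --partition-count 4
```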
## Tier upgrade
Once you create your IoT hub, you can upgrade from the basic tier to the standar
The partition configuration remains unchanged when you migrate from basic tier to standard tier. > [!NOTE]
-> The free tier does not support upgrading to basic or standard.
+> The free tier does not support upgrading to basic or standard tier.
## IoT Hub REST APIs
-The difference in supported capabilities between the basic and standard tiers of IoT Hub means that some API calls do not work with basic tier hubs. The following table shows which APIs are available:
+The difference in supported capabilities between the basic and standard tiers of IoT Hub means that some API calls don't work with basic tier IoT hubs. The following table shows which APIs are available:
-| API | Basic tier | Free/Standard tier |
+| API | Basic tier | Standard/Free tier |
| | - | - | | [Delete device](/javascript/api/azure-iot-digitaltwins-service/registrymanager#azure-iot-digitaltwins-service-registrymanager-deletedevice) | Yes | Yes | | [Get device](/rest/api/iothub/service/devices/get-identity) | Yes | Yes |
The best way to size an IoT Hub solution is to evaluate the traffic on a per-uni
* Cloud-to-device messages * Identity registry operations
-Traffic is measured for your IoT hub on a per-unit basis. When you create an IoT hub, you choose its tier and edition, and set the number of units available. You can purchase up to 200 units for the B1, B2, S1, or S2 edition, or up to 10 units for the B3 or S3 edition. After your IoT hub is created, you can change the number of units available within its edition, upgrade or downgrade between editions within its tier (B1 to B2), or upgrade from the basic to the standard tier (B1 to S1) without interrupting your existing operations. For more information, see [How to upgrade your IoT hub](iot-hub-upgrade.md).
+Traffic is measured for your IoT hub on a per-unit basis. When you create an IoT hub, you choose its tier and edition, and set the number of units available. You can purchase up to 200 units for the B1, B2, S1, or S2 edition, or up to 10 units for the B3 or S3 edition. After you create your IoT hub, without interrupting your existing operations, you can:
+
+- Change the number of units available within its edition (for example, upgrading from one to three units of B1)
+- Upgrade or downgrade between editions within its tier (for example, upgrading from B1 to B2)
+- Upgrade from the basic to the standard tier (for example, upgrading from B1 to S1)
+
+For more information, see [How to upgrade your IoT hub](iot-hub-upgrade.md).
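As a sketch of these adjustments on an existing hub, the following commands use the generic `--set` update to scale out within the same edition and then move up an edition within the tier. The hub name is a placeholder.

```azurecli
# Scale out within the same edition, then move to a higher edition within the tier (names are placeholders).
az iot hub update --name contoso-hub --set sku.capacity=3
az iot hub update --name contoso-hub --set sku.name=S2
```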
As an example of each tier's traffic capabilities, device-to-cloud messages follow these sustained throughput guidelines:
As an example of each tier's traffic capabilities, device-to-cloud messages foll
| B2, S2 |Up to 16 MB/minute per unit<br/>(22.8 GB/day/unit) |Average of 4,167 messages/minute per unit<br/>(6 million messages/day per unit) | | B3, S3 |Up to 814 MB/minute per unit<br/>(1144.4 GB/day/unit) |Average of 208,333 messages/minute per unit<br/>(300 million messages/day per unit) |
-Device-to-cloud throughput is only one of the metrics you need to consider when designing an IoT solution. For more comprehensive information, see [IoT Hub quotas and throttles](iot-hub-devguide-quotas-throttling.md).
+Device-to-cloud throughput is only one of the metrics you need to consider when designing an IoT solution. For more comprehensive information, see [IoT Hub quotas and throttling](iot-hub-devguide-quotas-throttling.md).
### Identity registry operation throughput
-IoT Hub identity registry operations are not supposed to be run-time operations, as they are mostly related to device provisioning.
+IoT Hub identity registry operations aren't supposed to be run-time operations, as they're mostly related to device provisioning.
-For specific burst performance numbers, see [IoT Hub quotas and throttles](iot-hub-devguide-quotas-throttling.md).
+For more information about specific burst performance numbers, see [IoT Hub quotas and throttling](iot-hub-devguide-quotas-throttling.md).
## Auto-scale
-If you are approaching the allowed message limit on your IoT hub, you can use these [steps to automatically scale](https://azure.microsoft.com/resources/samples/iot-hub-dotnet-autoscale/) to increment an IoT Hub unit in the same IoT Hub tier.
+If you're approaching the allowed message limit on your IoT hub, you can use these [steps to automatically scale](https://azure.microsoft.com/resources/samples/iot-hub-dotnet-autoscale/) to increment an IoT Hub unit in the same IoT Hub tier.
## Next steps
-* For more information about IoT Hub capabilities and performance details, see [IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub) or [IoT Hub quotas and throttles](iot-hub-devguide-quotas-throttling.md).
+* For more information about IoT Hub capabilities and performance details, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub) or [IoT Hub quotas and throttling](iot-hub-devguide-quotas-throttling.md).
-* To change your IoT Hub tier, follow the steps in [Upgrade your IoT hub](iot-hub-upgrade.md).
+* To change your IoT Hub tier, follow the steps in [How to upgrade your IoT hub](iot-hub-upgrade.md).
iot-hub Query Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/query-jobs.md
Here's a sample IoT hub device twin that is part of a job called **myJobId**:
{ "deviceId": "myDeviceId", "jobId": "myJobId",
- "jobType": "scheduleTwinUpdate",
+ "jobType": "scheduleUpdateTwin",
"status": "completed", "startTimeUtc": "2016-09-29T18:18:52.7418462", "endTimeUtc": "2016-09-29T18:20:52.7418462",
Here's a sample IoT hub device twin that is part of a job called **myJobId**:
Currently, this collection is queryable as **devices.jobs** in the IoT Hub query language. > [!IMPORTANT]
-> Currently, the jobs property is never returned when querying device twins. That is, queries that contain `FROM devices`. The jobs property can only be accessed directly with queries using `FROM devices.jobs`.
+> Currently, the jobs property isn't returned when querying device twins; that is, it isn't included in the results of queries that contain `FROM devices`. The jobs property can only be accessed directly with queries that use `FROM devices.jobs`.
For example, the following query returns all jobs (past and scheduled) that affect a single device:
For example, the following query retrieves all completed device twin update jobs
```sql SELECT * FROM devices.jobs WHERE devices.jobs.deviceId = 'myDeviceId'
- AND devices.jobs.jobType = 'scheduleTwinUpdate'
+ AND devices.jobs.jobType = 'scheduleUpdateTwin'
AND devices.jobs.status = 'completed' AND devices.jobs.createdTimeUtc > '2016-09-01' ```
iot-hub Query Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/query-twins.md
SELECT * FROM devices.modules
## Twin query limitations > [!IMPORTANT]
-> Query results can have a few minutes of delay with respect to the latest values in device twins. If querying individual device twins by ID, use the [get twin REST API](/jav#azure-iot-hub-service-sdks).
+> Twin queries are eventually consistent, and delays of up to 30 minutes should be tolerated. In most instances, a twin query returns results within a few seconds. IoT Hub strives to provide low latency for all operations. However, due to network conditions and other unpredictable factors, it can't guarantee a specific latency.
+
+An alternative option to twin queries is to query individual device twins by ID by using the [get twin REST API](/jav#azure-iot-hub-service-sdks).
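From the command line, the equivalent per-device lookup can be done with the Azure IoT extension for the Azure CLI; in this minimal sketch, the hub name and device ID are placeholders.

```azurecli
# Read a single device twin by device ID instead of running a twin query (names are placeholders).
az iot hub device-twin show --hub-name contoso-hub --device-id myDeviceId
```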
Query expressions can have a maximum length of 8192 characters. Currently, comparisons are supported only between primitive types (no objects), for instance `... WHERE properties.desired.config = properties.reported.config` is supported only if those properties have primitive values.
+We recommend that you don't take a dependency on the lastActivityTime field in the device identity properties for twin queries in any scenario. This field doesn't guarantee an accurate gauge of device status. Instead, use IoT Hub device lifecycle events to manage device state and activities. For more information on how to use IoT Hub lifecycle events in your solution, see [React to IoT Hub events by using Event Grid to trigger actions](/azure/iot-hub/iot-hub-event-grid).
+> [!Note]
+> Avoid making any assumptions about the maximum latency of this operation. For more information on how to build your solution with latency in mind, see [Latency Solutions](/azure/iot-hub/iot-hub-devguide-quotas-throttling).
+ ## Next steps * Understand the basics of the [IoT Hub query language](iot-hub-devguide-query-language.md)
key-vault Hsm Protected Keys Byok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/hsm-protected-keys-byok.md
tags: azure-resource-manager
Previously updated : 02/04/2021 Last updated : 11/21/2022
For more information, and for a tutorial to get started using Key Vault (includi
Here's an overview of the process. Specific steps to complete are described later in the article.
-* In Key Vault, generate a key (referred to as a *Key Exchange Key* (KEK)). The KEK must be an RSA-HSM key that has only the `import` key operation. Only Key Vault Premium SKU supports RSA-HSM keys.
+* In Key Vault, generate a key (referred to as a *Key Exchange Key* (KEK)). The KEK must be an RSA-HSM key that has only the `import` key operation. Only Key Vault Premium and Managed HSM support RSA-HSM keys.
* Download the KEK public key as a .pem file. * Transfer the KEK public key to an offline computer that is connected to an on-premises HSM. * In the offline computer, use the BYOK tool provided by your HSM vendor to create a BYOK file.
The following table lists prerequisites for using BYOK in Azure Key Vault:
| Requirement | More information | | | | | An Azure subscription |To create a key vault in Azure Key Vault, you need an Azure subscription. [Sign up for a free trial](https://azure.microsoft.com/pricing/free-trial/). |
-| A Key Vault Premium SKU to import HSM-protected keys |For more information about the service tiers and capabilities in Azure Key Vault, see [Key Vault Pricing](https://azure.microsoft.com/pricing/details/key-vault/). |
+| A Key Vault Premium or Managed HSM to import HSM-protected keys |For more information about the service tiers and capabilities in Azure Key Vault, see [Key Vault Pricing](https://azure.microsoft.com/pricing/details/key-vault/). |
| An HSM from the supported HSMs list and a BYOK tool and instructions provided by your HSM vendor | You must have permissions for an HSM and basic knowledge of how to use your HSM. See [Supported HSMs](#supported-hsms). | | Azure CLI version 2.1.0 or later | See [Install the Azure CLI](/cli/azure/install-azure-cli).|
The following table lists prerequisites for using BYOK in Azure Key Vault:
||EC|P-256<br />P-384<br />P-521|Vendor HSM|The key to be transferred to the Azure Key Vault HSM| ||||
-## Generate and transfer your key to the Key Vault HSM
+## Generate and transfer your key to Key Vault Premium HSM or Managed HSM
-To generate and transfer your key to a Key Vault HSM:
+To generate and transfer your key to a Key Vault Premium or Managed HSM:
* [Step 1: Generate a KEK](#step-1-generate-a-kek) * [Step 2: Download the KEK public key](#step-2-download-the-kek-public-key)
To generate and transfer your key to a Key Vault HSM:
### Step 1: Generate a KEK
-A KEK is an RSA key that's generated in a Key Vault HSM. The KEK is used to encrypt the key you want to import (the *target* key).
+A KEK is an RSA key that's generated in a Key Vault Premium or Managed HSM. The KEK is used to encrypt the key you want to import (the *target* key).
The KEK must be: - An RSA-HSM key (2,048-bit; 3,072-bit; or 4,096-bit)
Use the [az keyvault key create](/cli/azure/keyvault/key#az-keyvault-key-create)
```azurecli az keyvault key create --kty RSA-HSM --size 4096 --name KEKforBYOK --ops import --vault-name ContosoKeyVaultHSM ```
+or for Managed HSM
+
+```azurecli
+az keyvault key create --kty RSA-HSM --size 4096 --name KEKforBYOK --ops import --hsm-name ContosoKeyVaultHSM
+```
### Step 2: Download the KEK public key
Use [az keyvault key download](/cli/azure/keyvault/key#az-keyvault-key-download)
az keyvault key download --name KEKforBYOK --vault-name ContosoKeyVaultHSM --file KEKforBYOK.publickey.pem ```
+or for Managed HSM
+
+```azurecli
+az keyvault key download --name KEKforBYOK --hsm-name ContosoKeyVaultHSM --file KEKforBYOK.publickey.pem
+```
+ Transfer the KEKforBYOK.publickey.pem file to your offline computer. You will need this file in the next step. ### Step 3: Generate and prepare your key for transfer
To import an RSA key use following command. Parameter --kty is optional and defa
```azurecli az keyvault key import --vault-name ContosoKeyVaultHSM --name ContosoFirstHSMkey --byok-file KeyTransferPackage-ContosoFirstHSMkey.byok ```
+or for Managed HSM
+
+```azurecli
+az keyvault key import --hsm-name ContosoKeyVaultHSM --name ContosoFirstHSMkey --byok-file KeyTransferPackage-ContosoFirstHSMkey.byok
+```
To import an EC key, you must specify key type and the curve name.
To import an EC key, you must specify key type and the curve name.
az keyvault key import --vault-name ContosoKeyVaultHSM --name ContosoFirstHSMkey --kty EC-HSM --curve-name "P-256" --byok-file KeyTransferPackage-ContosoFirstHSMkey.byok ```
+or for Managed HSM
+
+```azurecli
+az keyvault key import --hsm-name ContosoKeyVaultHSM --name ContosoFirstHSMkey --kty EC-HSM --curve-name "P-256" --byok-file KeyTransferPackage-ContosoFirstHSMkey.byok
+```
+ If the upload is successful, Azure CLI displays the properties of the imported key. ## Next steps
logic-apps Set Up Devops Deployment Single Tenant Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-devops-deployment-single-tenant-azure-logic-apps.md
The following diagram shows the dependencies between your logic app project and
## Deploy logic app resources (zip deploy)
-After you push your logic app project to your source repository, you can set up build and release pipelines that deploy logic apps to infrastructure either inside or outside Azure.
+After you push your logic app project to your source repository, you can set up build and release pipelines either inside or outside Azure that deploy logic apps to infrastructure.
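As one possible sketch of the zip deploy step itself, the following assumes the `az logicapp deployment source config-zip` command available in recent Azure CLI versions; the app name, resource group, and zip path are placeholders, and the command should be verified against the CLI reference for your version.

```azurecli
# Zip deploy a single-tenant logic app project (names, paths, and the command itself are assumptions to verify).
az logicapp deployment source config-zip \
  --name contoso-logicapp \
  --resource-group contoso-rg \
  --src ./logic-app-project.zip
```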
### Build your project
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-mlflow-batch.md
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
1. First, let's connect to the Azure Machine Learning workspace where we're going to work.
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- ```bash
+ ```azurecli
az account set --subscription <subscription> az configure --defaults workspace=<workspace> group=<resource-group> location=<location> ```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
2. Batch Endpoint can only deploy registered models. In this case, we already have a local copy of the model in the repository, so we only need to publish the model to the registry in the workspace. You can skip this step if the model you are trying to deploy is already registered.
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- ```bash
+ ```azurecli
MODEL_NAME='heart-classifier' az ml model create --name $MODEL_NAME --type "mlflow_model" --path "heart-classifier-mlflow/model" ```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
```python model_name = 'heart-classifier'
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
3. Before moving forward, we need to make sure the batch deployments we are about to create can run on some infrastructure (compute). Batch deployments can run on any Azure ML compute that already exists in the workspace. That means that multiple batch deployments can share the same compute infrastructure. In this example, we are going to work on an AzureML compute cluster called `cpu-cluster`. Let's verify that the compute exists in the workspace, or create it otherwise.
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
Create a compute definition `YAML` like the following one: __cpu-cluster.yml__
+
```yaml $schema: https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json name: cluster-cpu
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
Create the compute using the following command:
- ```bash
+ ```azurecli
az ml compute create -f cpu-cluster.yml ```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
To create a new compute cluster in which to create the deployment, use the following script:
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
4. Now it is time to create the batch endpoint and deployment. Let's start with the endpoint first. Endpoints only require a name and a description to be created:
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
To create a new endpoint, create a `YAML` configuration like the following:
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
Then, create the endpoint with the following command:
- ```bash
+ ```azurecli
ENDPOINT_NAME='heart-classifier-batch'
- az ml batch-endpoint create -f endpoint.yml
+ az ml batch-endpoint create -n $ENDPOINT_NAME -f endpoint.yml
```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
To create a new endpoint, use the following script:
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
name="heart-classifier-batch", description="A heart condition classifier for batch inference", )
+ ```
+
+ Then, create the endpoint with the following command:
+
+ ```python
 ml_client.batch_endpoints.begin_create_or_update(endpoint) ``` 5. Now, let's create the deployment. MLflow models don't require you to indicate an environment or a scoring script when creating the deployments, as they're created for you. However, you can specify them if you want to customize how the deployment does inference.
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
Then, create the deployment with the following command:
- ```bash
+ ```azurecli
DEPLOYMENT_NAME="classifier-xgboost-mlflow"
- az ml batch-endpoint create -f endpoint.yml
+ az ml batch-deployment create -n $DEPLOYMENT_NAME -f endpoint.yml
```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
- To create a new deployment under the created endpoint, use the following script:
+ To create a new deployment under the created endpoint, first define the deployment:
```python deployment = BatchDeployment(
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
retry_settings=BatchRetrySettings(max_retries=3, timeout=300), logging_level="info", )
+ ```
+
+ Then, create the deployment with the following command:
+
+ ```python
ml_client.batch_deployments.begin_create_or_update(deployment) ```
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
6. Although you can invoke a specific deployment inside an endpoint, you'll usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such a deployment is named the "default" deployment. This gives you the ability to change the default deployment, and hence the model serving it, without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- ```bash
+ ```azurecli
az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME ```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
```python
+ endpoint = ml_client.batch_endpoints.get(endpoint.name)
endpoint.defaults.deployment_name = deployment.name ml_client.batch_endpoints.begin_create_or_update(endpoint) ```
For testing our endpoint, we are going to use a sample of unlabeled data located
1. Let's create the data asset first. This data asset consists of a folder with multiple CSV files that we want to process in parallel using batch endpoints. You can skip this step if your data is already registered as a data asset or you want to use a different input type.
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- Create a data asset definition in `YAML`:
+ a. Create a data asset definition in `YAML`:
__heart-dataset-unlabeled.yml__ ```yaml
For testing our endpoint, we are going to use a sample of unlabeled data located
path: heart-classifier-mlflow/data ```
- Then, create the data asset:
+ b. Create the data asset:
- ```bash
+ ```azurecli
az ml data create -f heart-dataset-unlabeled.yml ```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
+
+ a. Create a data asset definition:
```python data_path = "heart-classifier-mlflow/data"
For testing our endpoint, we are going to use a sample of unlabeled data located
description="An unlabeled dataset for heart classification", name=dataset_name, )
+ ```
+
+ b. Create the data asset:
+
+ ```python
ml_client.data.create_or_update(heart_dataset_unlabeled) ```
+ c. Refresh the object to reflect the changes:
+
+ ```python
+ heart_dataset_unlabeled = ml_client.data.get(name=dataset_name)
+ ```
+
2. Now that the data is uploaded and ready to be used, let's invoke the endpoint:
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- ```bash
+ ```azurecli
JOB_NAME=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input azureml:heart-dataset-unlabeled@latest | jq -r '.name') ``` > [!NOTE] > The utility `jq` may not be installed on every installation. You can find installation instructions at [this link](https://stedolan.github.io/jq/download/).
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
```python input = Input(type=AssetTypes.URI_FOLDER, path=heart_dataset_unlabeled.id)
For testing our endpoint, we are going to use a sample of unlabeled data located
3. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes:
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
- ```bash
+ ```azurecli
az ml job show --name $JOB_NAME ```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
```python ml_client.jobs.get(job.name)
The file is structured as follows:
You can download the results of the job by using the job name:
-# [Azure ML CLI](#tab/cli)
+# [Azure CLI](#tab/cli)
To download the predictions, use the following command:
-```bash
+```azurecli
az ml job download --name $JOB_NAME --output-name score --download-path ./ ```
-# [Azure ML SDK for Python](#tab/sdk)
+# [Python](#tab/sdk)
```python ml_client.jobs.download(name=job.name, output_name='score', download_path='./')
Use the following steps to deploy an MLflow model with a custom scoring script.
> [!IMPORTANT] > This example uses a conda environment specified at `/heart-classifier-mlflow/environment/conda.yaml`. This file was created by combining the original MLflow conda dependencies file and adding the package `azureml-core`. __You can't use the `conda.yml` file from the model directly__.
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file.
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
Let's get a reference to the environment:
Use the following steps to deploy an MLflow model with a custom scoring script.
1. Let's create the deployment now:
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/cli)
To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
Use the following steps to deploy an MLflow model with a custom scoring script.
Then, create the deployment with the following command:
- ```bash
- az ml batch-endpoint create -f endpoint.yml
+ ```azurecli
+ az ml batch-deployment create -f deployment.yml
```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/sdk)
To create a new deployment under the created endpoint, use the following script:
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
Traffic to one deployment can also be mirrored (copied) to another deployment. M
:::image type="content" source="media/concept-endpoints/endpoint-concept-mirror.png" alt-text="Diagram showing an endpoint mirroring traffic to a deployment.":::
-Learn how to [safely rollout to online endpoints](how-to-safely-rollout-managed-endpoints.md).
+Learn how to [safely rollout to online endpoints](how-to-safely-rollout-online-endpoints.md).
### Application Insights integration
The following table highlights the key differences between managed online endpoi
| **Managed identity** | [Supported](how-to-access-resources-from-endpoints-managed-identities.md) | Supported | | **Virtual Network (VNET)** | [Supported](how-to-secure-online-endpoint.md) (preview) | Supported | | **View costs** | [Endpoint and deployment level](how-to-view-online-endpoints-costs.md) | Cluster level |
-| **Mirrored traffic** | [Supported](how-to-safely-rollout-managed-endpoints.md#test-the-deployment-with-mirrored-traffic-preview) | Unsupported |
+| **Mirrored traffic** | [Supported](how-to-safely-rollout-online-endpoints.md#test-the-deployment-with-mirrored-traffic-preview) | Unsupported |
| **No-code deployment** | Supported ([MLflow](how-to-deploy-mlflow-models-online-endpoints.md) and [Triton](how-to-deploy-with-triton.md) models) | Supported ([MLflow](how-to-deploy-mlflow-models-online-endpoints.md) and [Triton](how-to-deploy-with-triton.md) models) | ### Managed online endpoints
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
When deploying to an online endpoint, you can use controlled rollout to enable t
* Perform A/B testing by routing traffic to different deployments within the endpoint. * Switch between endpoint deployments by updating the traffic percentage in endpoint configuration.
-For more information, see [Controlled rollout of machine learning models](./how-to-safely-rollout-managed-endpoints.md).
+For more information, see [Controlled rollout of machine learning models](./how-to-safely-rollout-online-endpoints.md).
### Analytics
machine-learning Concept Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-vulnerability-management.md
For code-based training experiences, you control which Azure Machine Learning en
* [Azure Machine Learning Base Images Repository](https://github.com/Azure/AzureML-Containers) * [Data Science Virtual Machine release notes](./data-science-virtual-machine/release-notes.md)
-* [AzureML Python SDK Release Notes](./azure-machine-learning-release-notes.md)
+* [AzureML Python SDK Release Notes](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ml/azure-ai-ml/CHANGELOG.md)
* [Machine learning enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security)
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
msi_client.user_assigned_identities.delete(
## Next steps * [Deploy and score a machine learning model by using a online endpoint](how-to-deploy-managed-online-endpoints.md).
-* For more on deployment, see [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md).
+* For more on deployment, see [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md).
* For more information on using the CLI, see [Use the CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md). * To see which compute resources you can use, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md). * For more on costs, see [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md).
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
When you create a data asset in Azure Machine Learning, you'll need to specify a
> [!NOTE] > When you create a data asset from a local path, it will be automatically uploaded to the default Azure Machine Learning datastore in the cloud.
+> [!IMPORTANT]
+> The studio only supports browsing of credential-less ADLS Gen 2 datastores.
+ ## Data asset types - [**URIs**](#Create a `uri_folder` data asset) - A **U**niform **R**esource **I**dentifier that is a reference to a storage location on your local computer or in the cloud that makes it easy to access data in your jobs. Azure Machine Learning distinguishes two types of URIs:`uri_file` and `uri_folder`.
machine-learning How To Deploy Automl Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-automl-endpoint.md
You can learn to deploy to managed online endpoints with SDK more in [Deploy mac
## Next steps - [Troubleshooting online endpoints deployment](how-to-troubleshoot-managed-online-endpoints.md)-- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
## Next steps -- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)
- [Troubleshooting online endpoints deployment](./how-to-troubleshoot-online-endpoints.md) - [Torch serve sample](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-torchserve-densenet.sh)
machine-learning How To Deploy Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoints.md
The `begin_create_or_update` method also works with local deployments. Use the s
> The above is an example of inplace rolling update. > * For managed online endpoint, the same deployment is updated with the new configuration, with 20% nodes at a time, i.e. if the deployment has 10 nodes, 2 nodes at a time will be updated. > * For Kubernetes online endpoint, the system will iterately create a new deployment instance with the new configuration and delete the old one.
-> * For production usage, you might want to consider [blue-green deployment](how-to-safely-rollout-managed-endpoints.md), which offers a safer alternative.
+> * For production usage, you might want to consider [blue-green deployment](how-to-safely-rollout-online-endpoints.md), which offers a safer alternative.
### (Optional) Configure autoscaling
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
To learn more, review these articles:
- [Deploy models with REST](how-to-deploy-with-rest.md) - [Create and use online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md)-- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)
- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md) - [Use batch endpoints for batch scoring](batch-inference/how-to-use-batch-endpoint.md) - [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models.md
To learn more, review these articles:
- [Deploy models with REST](how-to-deploy-with-rest.md) - [Create and use online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md)-- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)
- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md) - [Use batch endpoints for batch scoring](batch-inference/how-to-use-batch-endpoint.md) - [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)
machine-learning How To Deploy With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-rest.md
If you aren't going use the deployment, you should delete it with the below comm
* Learn to [Troubleshoot online endpoints deployment and scoring](how-to-troubleshoot-managed-online-endpoints.md) * Learn how to [Access Azure resources with a online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md) * Learn how to [monitor online endpoints](how-to-monitor-online-endpoints.md).
-* Learn [safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md).
+* Learn [safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md).
* [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md). * [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md). * Learn about limits on managed online endpoints in [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-triton.md
To learn more, review these articles:
- [Deploy models with REST](how-to-deploy-with-rest.md) - [Create and use managed online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md)-- [Safe rollout for online endpoints ](how-to-safely-rollout-managed-endpoints.md)
+- [Safe rollout for online endpoints ](how-to-safely-rollout-online-endpoints.md)
- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md) - [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md) - [Access Azure resources with a managed online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md)
machine-learning How To Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md
az group delete --resource-group <resource-group-name>
## Next steps -- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)
- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md) - [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md) - [Access Azure resources with an online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md)
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
instance_count = ceil(concurrent_requests / max_concurrent_requests_per_instance
## Next steps - [Deploy and score a machine learning model with a managed online endpoint](how-to-deploy-managed-online-endpoints.md)-- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)
- [Online endpoint YAML reference](reference-yaml-endpoint-online.md)
machine-learning Reference Yaml Endpoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-endpoint-online.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `identity` | object | The managed identity configuration for accessing Azure resources for endpoint provisioning and inference. | | | | `identity.type` | string | The type of managed identity. If the type is `user_assigned`, the `identity.user_assigned_identities` property must also be specified. | `system_assigned`, `user_assigned` | | | `identity.user_assigned_identities` | array | List of fully qualified resource IDs of the user-assigned identities. | | |
-| `traffic` | object | Traffic represents the percentage of requests to be served by different deployments. It's represented by a dictionary of key-value pairs, where keys represent the deployment name and value represent the percentage of traffic to that deployment. For example, `blue: 90 green: 10` means 90% requests are sent to the deployment named `blue` and 10% is sent to deployment `green`. Total traffic has to either be 0 or sum up to 100. See [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md) to see the traffic configuration in action. <br><br> Note: you can't set this field during online endpoint creation, as the deployments under that endpoint must be created before traffic can be set. You can update the traffic for an online endpoint after the deployments have been created using `az ml online-endpoint update`; for example, `az ml online-endpoint update --name <endpoint_name> --traffic "blue=90 green=10"`. | | |
+| `traffic` | object | Traffic represents the percentage of requests to be served by different deployments. It's represented by a dictionary of key-value pairs, where the keys are deployment names and the values are the percentage of traffic to each deployment. For example, `blue: 90 green: 10` means 90% of requests are sent to the deployment named `blue` and 10% to the deployment named `green`. Total traffic has to either be 0 or sum up to 100. See [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md) to see the traffic configuration in action. <br><br> Note: you can't set this field during online endpoint creation, as the deployments under that endpoint must be created before traffic can be set. You can update the traffic for an online endpoint after the deployments have been created using `az ml online-endpoint update`; for example, `az ml online-endpoint update --name <endpoint_name> --traffic "blue=90 green=10"`. | | |
| `public_network_access` | string | This flag controls the visibility of the managed endpoint. When `disabled`, inbound scoring requests are received using the [private endpoint of the Azure Machine Learning workspace](how-to-configure-private-link.md) and the endpoint can't be reached from public networks. This flag is applicable only for managed endpoints | `enabled`, `disabled` | `enabled` |
-| `mirror_traffic` | string | Percentage of live traffic to mirror to a deployment. Mirroring traffic doesn't change the results returned to clients. The mirrored percentage of traffic is copied and submitted to the specified deployment so you can gather metrics and logging without impacting clients. For example, to check if latency is within acceptable bounds and that there are no HTTP errors. It's represented by a dictionary with a single key-value pair, where the key represents the deployment name and the value represents the percentage of traffic to mirror to the deployment. For more information, see [Test a deployment with mirrored traffic](how-to-safely-rollout-managed-endpoints.md#test-the-deployment-with-mirrored-traffic-preview).
+| `mirror_traffic` | string | Percentage of live traffic to mirror to a deployment. Mirroring traffic doesn't change the results returned to clients. The mirrored percentage of traffic is copied and submitted to the specified deployment so you can gather metrics and logging without impacting clients. For example, to check if latency is within acceptable bounds and that there are no HTTP errors. It's represented by a dictionary with a single key-value pair, where the key represents the deployment name and the value represents the percentage of traffic to mirror to the deployment. For more information, see [Test a deployment with mirrored traffic](how-to-safely-rollout-online-endpoints.md#test-the-deployment-with-mirrored-traffic-preview).
## Remarks
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/azure-machine-learning-release-notes.md
+
+ Title: Python SDK release notes
+
+description: Learn about the latest updates to Azure Machine Learning Python SDK.
+ Last updated : 10/25/2022
+# Azure Machine Learning Python SDK release notes
+
+In this article, learn about Azure Machine Learning Python SDK releases. For the full SDK reference content, visit Azure Machine Learning's [**main SDK for Python**](/python/api/overview/azure/ml/intro) reference page.
+
+__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader:
+`https://learn.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us`
++
+## 2022-10-25
+
+### Azure Machine Learning SDK for Python v1.47.0
+ + **azureml-automl-dnn-nlp**
+ + Runtime changes for AutoML NLP to account for fixed training parameters, as part of the newly introduced model sweeping and hyperparameter tuning.
+ + **azureml-mlflow**
+ + The `AZUREML_ARTIFACTS_DEFAULT_TIMEOUT` environment variable can be used to control the timeout for artifact upload (see the sketch after this list).
+ + **azureml-train-automl-runtime**
+ + Many Models and Hierarchical Time Series training now enforce a check on timeout parameters to detect conflicts before submitting the experiment. This prevents experiment failures during the run by raising an exception before the experiment is submitted.
+ + Customers can now control the step size while using rolling forecast in Many Models inference.
+ + ManyModels inference with unpartitioned tabular data now supports forecast_quantiles.
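+ The `AZUREML_ARTIFACTS_DEFAULT_TIMEOUT` item in the list above maps to an ordinary environment variable. The following minimal Python sketch assumes the value is interpreted in seconds and uses a hypothetical `outputs/model.pkl` artifact name; adjust both for your job.
+ ```
+ import os
+
+ # Assumption: the timeout value is in seconds; set it before any artifact upload happens.
+ os.environ["AZUREML_ARTIFACTS_DEFAULT_TIMEOUT"] = "600"
+
+ from azureml.core import Run
+
+ run = Run.get_context()  # the current run, when executed inside an Azure ML job
+ run.upload_file(name="outputs/model.pkl", path_or_stream="model.pkl")  # hypothetical artifact
+ ```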
+
+## 2022-09-26
+
+### Azure Machine Learning SDK for Python v1.46.0
+ + **azureml-automl-dnn-nlp**
+ + Customers will no longer be allowed to specify a line in CoNLL that consists of only a token. The line must always be either an empty newline or one with exactly one token followed by exactly one space followed by exactly one label.
+ + **azureml-contrib-automl-dnn-forecasting**
+ + There is a corner case where samples are reduced to 1 after the cross-validation split, but sample_size still points to the count before the split, and hence batch_size ends up being more than the sample count in some cases. In this fix, sample_size is initialized after the split.
+ + **azureml-core**
+ + Added a deprecation warning when inference customers use CLI/SDK v1 model deployment APIs to deploy models, and also when the Python version is 3.6 or lower.
+ + The following values of `AZUREML_LOG_DEPRECATION_WARNING_ENABLED` change the behavior as follows:
+ + Default - displays the warning when the customer uses Python 3.6 or lower and CLI/SDK v1.
+ + `True` - displays the sdk v1 deprecation warning on azureml-sdk packages.
+ + `False` - disables the sdk v1 deprecation warning on azureml-sdk packages.
+ + Command to be executed to set the environment variable to disable the deprecation message:
+ + Windows - `setx AZUREML_LOG_DEPRECATION_WARNING_ENABLED "False"`
+ + Linux - `export AZUREML_LOG_DEPRECATION_WARNING_ENABLED="False"`
+ + **azureml-interpret**
+ + update azureml-interpret package to interpret-community 0.27.*
+ + **azureml-pipeline-core**
+ + Fix schedule default time zone to UTC.
+ + Fix incorrect reuse when using SqlDataReference in DataTransfer step.
+ + **azureml-responsibleai**
+ + update azureml-responsibleai package and curated images to raiwidgets and responsibleai v0.22.0
+ + **azureml-train-automl-runtime**
+ + Fixed a bug in generated scripts that caused certain metrics to not render correctly in the UI.
+ + Many Models now supports rolling forecast for inferencing.
+ + Added support to return the top `N` models in the Many Models scenario.
++++
+## 2022-08-29
+
+### Azure Machine Learning SDK for Python v1.45.0
+ + **azureml-automl-runtime**
+ + Fixed a bug where the sample_weight column was not properly validated.
+ + Added rolling_forecast() public method to the forecasting pipeline wrappers for all supported forecasting models. This method replaces the deprecated rolling_evaluation() method.
+ + Fixed an issue where AutoML Regression tasks may fall back to train-valid split for model evaluation, when CV would have been a more appropriate choice.
+ + **azureml-core**
+ + New cloud configuration suffix added, "aml_discovery_endpoint".
+ + Updated the vendored azure-storage package from version 2 to version 12.
+ + **azureml-mlflow**
+ + New cloud configuration suffix added, "aml_discovery_endpoint".
+ + **azureml-responsibleai**
+ + update azureml-responsibleai package and curated images to raiwidgets and responsibleai 0.21.0
+ + **azureml-sdk**
+ + The azureml-sdk package now allows Python 3.9.
++
+## 2022-08-01
+
+### Azure Machine Learning SDK for Python v1.44.0
+
+ + **azureml-automl-dnn-nlp**
+ + Weighted accuracy and Matthews correlation coefficient (MCC) will no longer be a metric displayed on calculated metrics for NLP Multilabel classification.
+ + **azureml-automl-dnn-vision**
+ + Raise user error when invalid annotation format is provided
+ + **azureml-cli-common**
+ + Updated the v1 CLI description
+ + **azureml-contrib-automl-dnn-forecasting**
+ + Fixed the "Failed to calculate TCN metrics." issues caused for TCNForecaster when different timeseries in the validation dataset have different lengths.
+ + Added auto timeseries ID detection for DNN forecasting models like TCNForecaster.
+ + Fixed a bug with the Forecast TCN model where validation data could be corrupted in some circumstances when the user provided the validation set.
+ + **azureml-core**
+ + Allow setting a timeout_seconds parameter when downloading artifacts from a Run
+ + Warning message added - Azure ML CLI v1 is getting retired on 30 Sep 2025. Users are recommended to adopt CLI v2.
+ + Fix submission to non-AmlComputes throwing exceptions.
+ + Added docker context support for environments
+ + **azureml-interpret**
+ + Increase numpy version for AutoML packages
+ + **azureml-pipeline-core**
+ + Fix regenerate_outputs=True not taking effect when submitting a pipeline.
+ + **azureml-train-automl-runtime**
+ + Increase numpy version for AutoML packages
+ + Enable code generation for vision and nlp
+ + Original columns on which grains are created are added as part of predictions.csv
+
+## 2022-07-21
+
+### Announcing end of support for Python 3.6 in AzureML SDK v1 packages
+++ **Feature deprecation**
+ + **Deprecate Python 3.6 as a supported runtime for SDK v1 packages**
+ + On December 05, 2022, AzureML will deprecate Python 3.6 as a supported runtime, formally ending our Python 3.6 support for SDK v1 packages.
+ + From the deprecation date of December 05, 2022, AzureML will no longer apply security patches and other updates to the Python 3.6 runtime used by AzureML SDK v1 packages.
+ + The existing AzureML SDK v1 packages with Python 3.6 will still continue to run. However, AzureML strongly recommends that you migrate your scripts and dependencies to a supported Python runtime version so that you continue to receive security patches and remain eligible for technical support.
+ + We recommend using Python 3.8 version as a runtime for AzureML SDK v1 packages.
+ + In addition, AzureML SDK v1 packages using Python 3.6 will no longer be eligible for technical support.
+ + If you have any questions, contact us through AML Support.
+
+## 2022-06-27
+
+ + **azureml-automl-dnn-nlp**
+ + Remove duplicate labels column from multi-label predictions
+ + **azureml-contrib-automl-pipeline-steps**
+ + Many Models now provides the capability to generate prediction output in CSV format as well. Many Models prediction will now include column names in the output file for the **csv** file format.
+ + **azureml-core**
+ + ADAL authentication is now deprecated and all authentication classes now use MSAL authentication. Please install azure-cli>=2.30.0 to utilize MSAL-based authentication when using the AzureCliAuthentication class.
+ + Added a fix to force environment registration when calling `Environment.build(workspace)`. The fix resolves confusion where the latest built environment was used instead of the requested one when an environment is cloned or inherited from another instance (see the sketch after this list).
+ + SDK warning message to restart Compute Instance before May 31, 2022, if it was created before September 19, 2021
+ + **azureml-interpret**
+ + Updated azureml-interpret package to interpret-community 0.26.*
+ + In the azureml-interpret package, add ability to get raw and engineered feature names from scoring explainer. Also, add example to the scoring notebook to get feature names from the scoring explainer and add documentation about raw and engineered feature names.
+ + **azureml-mlflow**
+ + azureml-core as a dependency of azureml-mlflow has been removed. MLflow projects and local deployments still require azureml-core, which needs to be installed separately.
+ + Adding support for creating endpoints and deploying to them via the MLflow client plugin.
+ + **azureml-responsibleai**
+ + Updated azureml-responsibleai package and environment images to latest responsibleai and raiwidgets 0.19.0 release
+ + **azureml-train-automl-client**
+ + Now OutputDatasetConfig is supported as the input of the MM/HTS pipeline builder. The mappings are: 1) OutputTabularDatasetConfig -> treated as unpartitioned tabular dataset. 2) OutputFileDatasetConfig -> treated as file dataset.
+ + **azureml-train-automl-runtime**
+ + Added data validation that requires the number of minority class samples in the dataset to be at least as much as the number of CV folds requested.
+ + Automatic cross-validation parameter configuration is now available for AutoML forecasting tasks. Users can now specify "auto" for n_cross_validations and cv_step_size or leave them empty, and AutoML will provide those configurations based on your data. However, this feature is currently not supported when TCN is enabled.
+ + Forecasting Parameters in Many Models and Hierarchical Time Series can now be passed via object rather than using individual parameters in dictionary.
+ + Enabled forecasting model endpoints with quantiles support to be consumed in Power BI.
+ + Updated AutoML scipy dependency upper bound to 1.5.3 from 1.5.2
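+ To illustrate the `Environment.build(workspace)` item above, here is a minimal, hedged sketch; the environment name and conda file are hypothetical, and the point is only that `build` now registers the exact environment instance you pass before building it.
+ ```
+ from azureml.core import Workspace, Environment
+
+ ws = Workspace.from_config()                  # assumes a config.json is present
+ env = Environment.from_conda_specification(
+     name="my-training-env",                   # hypothetical environment name
+     file_path="environment.yml",              # hypothetical conda specification file
+ )
+
+ # build() now forces registration of this exact environment before building,
+ # so the image corresponds to the environment you asked for, not a stale clone.
+ build = env.build(workspace=ws)
+ build.wait_for_completion(show_output=True)
+ ```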
+
+## 2022-04-25
+
+### Azure Machine Learning SDK for Python v1.41.0
+
+**Breaking change warning**
+
+This breaking change comes from the June release of `azureml-inference-server-http`. In the `azureml-inference-server-http` June release (v0.9.0), Python 3.6 support will be dropped. Since `azureml-defaults` depends on `azureml-inference-server-http`, this change will be propagated to `azureml-defaults`. If you are not using `azureml-defaults` for inference, feel free to use `azureml-core` or any other AzureML SDK packages directly instead of installing `azureml-defaults`.
+
+ + **azureml-automl-dnn-nlp**
+ + Turning on long range text feature by default.
+ + **azureml-automl-dnn-vision**
+ + Changing the ObjectAnnotation Class type from object to "dataobject".
+ + **azureml-core**
+ + This release updates the Keyvault class used by customers to enable them to provide the keyvault content type when creating a secret using the SDK. This release also updates the SDK to include a new function that enables customers to retrieve the value of the content type from a specific secret.
+ + **azureml-interpret**
+ + updated azureml-interpret package to interpret-community 0.25.0
+ + **azureml-pipeline-core**
+ + Do not print run details anymore if `pipeline_run.wait_for_completion` is called with `show_output=False`.
+ + **azureml-train-automl-runtime**
+ + Fixes a bug that would cause code generation to fail when the azureml-contrib-automl-dnn-forecasting package is present in the training environment.
+ + Fix error when using a test dataset without a label column with AutoML Model Testing.
+
+## 2022-03-28
+
+### Azure Machine Learning SDK for Python v1.40.0
+ + **azureml-automl-dnn-nlp**
+ + The Long Range Text feature is now optional and is enabled only if customers explicitly opt in to it, using the kwarg "enable_long_range_text".
+ + Adding data validation layer for multi-class classification scenario which leverages the same base class as multilabel for common validations, and a derived class for additional task specific data validation checks.
+ + **azureml-automl-dnn-vision**
+ + Fixing KeyError while computing class weights.
+ + **azureml-contrib-reinforcementlearning**
+ + SDK warning message for upcoming deprecation of RL service
+ + **azureml-core**
+ + Return logs for runs that went through our new runtime when calling any of the log retrieval functions on the run object, including `run.get_details`, `run.get_all_logs`, and so on (see the sketch after this list).
+ + Added experimental method Datastore.register_onpremises_hdfs to allow users to create datastores pointing to on-premises HDFS resources.
+ + Updating the CLI documentation in the help command
+ + **azureml-interpret**
+ + For azureml-interpret package, remove shap pin with packaging update. Remove numba and numpy pin after CE env update.
+ + **azureml-mlflow**
+ + Bugfix for MLflow deployment client run_local failing when config object wasn't provided.
+ + **azureml-pipeline-steps**
+ + Remove broken link of deprecated pipeline EstimatorStep
+ + **azureml-responsibleai**
+ + update azureml-responsibleai package to raiwidgets and responsibleai 0.17.0 release
+ + **azureml-train-automl-runtime**
+ + Code generation for automated ML now supports ForecastTCN models (experimental).
+ + Models created via code generation will now have all metrics calculated by default (except normalized mean absolute error, normalized median absolute error, normalized RMSE, and normalized RMSLE in the case of forecasting models). The list of metrics to be calculated can be changed by editing the return value of `get_metrics_names()`. Cross validation will now be used by default for forecasting models created via code generation.
+ + **azureml-training-tabular**
+ + The list of metrics to be calculated can be changed by editing the return value of `get_metrics_names()`. Cross validation will now be used by default for forecasting models created via code generation.
+ + Converting decimal type y-test into float to allow for metrics computation to proceed without errors.
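+ For the log-retrieval item above, here is a small sketch (with a hypothetical experiment name) that works for runs on the new runtime as well as older ones:
+ ```
+ from azureml.core import Workspace, Experiment
+
+ ws = Workspace.from_config()
+ run = next(Experiment(ws, "my-experiment").get_runs())   # hypothetical experiment; latest run
+
+ details = run.get_details()      # run metadata, including log file references
+ log_paths = run.get_all_logs()   # downloads all log files and returns their local paths
+ print(details["status"], log_paths)
+ ```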
+
+## 2022-02-28
+
+### Azure Machine Learning SDK for Python v1.39.0
+ + **azureml-automl-core**
+ + Fix incorrect form displayed in PBI for integration with AutoML regression models
+ + Added a min-label-classes check for both classification tasks (multi-class and multi-label). It will throw an error for the customer's run if the number of unique classes in the input training dataset is fewer than 2. It is meaningless to run classification on fewer than two classes.
+ + **azureml-automl-runtime**
+ + Converting decimal type y-test into float to allow for metrics computation to proceed without errors.
+ + AutoML training now supports numpy version 1.8.
+ + **azureml-contrib-automl-dnn-forecasting**
+ + Fixed a bug in the TCNForecaster model where not all training data would be used when cross-validation settings were provided.
+ + Fixed a bug in the TCNForecaster wrapper's forecast method that was corrupting inference-time predictions. Also fixed an issue where the forecast method would not use the most recent context data in train-valid scenarios.
+ + **azureml-interpret**
+ + For azureml-interpret package, remove shap pin with packaging update. Remove numba and numpy pin after CE env update.
+ + **azureml-responsibleai**
+ + azureml-responsibleai package to raiwidgets and responsibleai 0.17.0 release
+ + **azureml-synapse**
+ + Fixed the issue where the magic widget disappeared.
+ + **azureml-train-automl-runtime**
+ + Updating AutoML dependencies to support Python 3.8. This change will break compatibility with models trained with SDK 1.37 or below due to newer Pandas interfaces being saved in the model.
+ + AutoML training now supports numpy version 1.19
+ + Fix AutoML reset index logic for ensemble models in automl_setup_model_explanations API
+ + In AutoML, use lightgbm surrogate model instead of linear surrogate model for sparse case after latest lightgbm version upgrade
+ + All internal intermediate artifacts that are produced by AutoML are now stored transparently on the parent run (instead of being sent to the default workspace blob store). Users should be able to see the artifacts that AutoML generates under the `outputs/` directory on the parent run.
+
+
+## 2022-01-24
+
+### Azure Machine Learning SDK for Python v1.38.0
+ + **azureml-automl-core**
+ + Tabnet Regressor and Tabnet Classifier support in AutoML
+ + Saving data transformer in parent run outputs, which can be reused to produce same featurized dataset which was used during the experiment run
+ + Supporting getting primary metrics for Forecasting task in get_primary_metrics API.
+ + Renamed second optional parameter in v2 scoring scripts as GlobalParameters
+ + **azureml-automl-dnn-vision**
+ + Added the scoring metrics in the metrics UI
+ + **azureml-automl-runtime**
+ + Bug fix for cases where the algorithm name for NimbusML models may show up as an empty string, either in ML Studio or in the console output.
+ + **azureml-core**
+ + Added the parameter blobfuse_enabled in azureml.core.webservice.aks.AksWebservice.deploy_configuration. When this parameter is True, models and scoring files will be downloaded with blobfuse instead of the blob storage API (see the sketch after this list).
+ + **azureml-interpret**
+ + Updated azureml-interpret to interpret-community 0.24.0
+ + In azureml-interpret update scoring explainer to support latest version of lightgbm with sparse TreeExplainer
+ + Update azureml-interpret to interpret-community 0.23.*
+ + **azureml-pipeline-core**
+ + Added a note in PipelineData recommending users to use pipeline output datasets instead.
+ + **azureml-pipeline-steps**
+ + Add `environment_variables` to ParallelRunConfig. Runtime environment variables can be passed by this parameter and will be set on the process where the user script is executed.
+ + **azureml-train-automl-client**
+ + Tabnet Regressor and Tabnet Classifier support in AutoML
+ + **azureml-train-automl-runtime**
+ + Saving data transformer in parent run outputs, which can be reused to produce same featurized dataset which was used during the experiment run
+ + **azureml-train-core**
+ + Enable support for early termination for Bayesian Optimization in Hyperdrive
+ + Bayesian and GridParameterSampling objects can now pass on properties
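+ A minimal sketch of the `blobfuse_enabled` item above; the resource sizes are placeholders, and the flag simply switches the model and scoring-file download path to blobfuse:
+ ```
+ from azureml.core.webservice import AksWebservice
+
+ # Deployment configuration for an AKS web service; blobfuse_enabled=True makes the
+ # service download models and scoring files through blobfuse rather than the blob storage API.
+ aks_config = AksWebservice.deploy_configuration(
+     cpu_cores=1,          # placeholder sizing
+     memory_gb=2,          # placeholder sizing
+     blobfuse_enabled=True,
+ )
+ ```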
++
+## 2021-12-13
+
+### Azure Machine Learning SDK for Python v1.37.0
++ **Breaking changes**
+ + **azureml-core**
+ + Starting in version 1.37.0, AzureML SDK uses MSAL as the underlying authentication library. MSAL uses Azure Active Directory (Azure AD) v2.0 authentication flow to provide more functionality and increases security for token cache. For more details, see [Overview of the Microsoft Authentication Library (MSAL)](../../active-directory/develop/msal-overview.md).
+ + Update AML SDK dependencies to the latest version of Azure Resource Management Client Library for Python (azure-mgmt-resource>=15.0.0,<20.0.0) & adopt track2 SDK.
+ + Starting in version 1.37.0, azure-ml-cli extension should be compatible with the latest version of Azure CLI >=2.30.0.
+ + When using Azure CLI in a pipeline, such as Azure DevOps, ensure all tasks/stages are using versions of Azure CLI above v2.30.0 for MSAL-based Azure CLI. Azure CLI 2.30.0 is not backward compatible with prior versions and throws an error when using incompatible versions. To use Azure CLI credentials with AzureML SDK, Azure CLI should be installed as a pip package.
+
++ **Bug fixes and improvements**
+ + **azureml-core**
+ + Removed instance types from the attach workflow for Kubernetes compute. Instance types can now directly be set up in the Kubernetes cluster. For more details, please visit aka.ms/amlarc/doc.
+ + **azureml-interpret**
+ + updated azureml-interpret to interpret-community 0.22.*
+ + **azureml-pipeline-steps**
+ + Fixed a bug where the experiment "placeholder" might be created on submission of a Pipeline with an AutoMLStep.
+ + **azureml-responsibleai**
+ + update azureml-responsibleai and compute instance environment to responsibleai and raiwidgets 0.15.0 release
+ + update azureml-responsibleai package to latest responsibleai 0.14.0.
+ + **azureml-tensorboard**
+ + You can now use `Tensorboard(runs, use_display_name=True)` to mount the TensorBoard logs to folders named after `run.display_name/run.id` instead of `run.id` (see the sketch after this list).
+ + **azureml-train-automl-client**
+ + Fixed a bug where the experiment "placeholder" might be created on submission of a Pipeline with an AutoMLStep.
+ + Update AutoMLConfig test_data and test_size docs to reflect preview status.
+ + **azureml-train-automl-runtime**
+ + Added new feature that allows users to pass time series grains with one unique value.
+ + In certain scenarios, an AutoML model can predict NaNs. The rows that correspond to these NaN predictions will be removed from test datasets and predictions before computing metrics in test runs.
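+ The `Tensorboard(runs, use_display_name=True)` item above can be exercised roughly as follows; the experiment name is hypothetical and the azureml-tensorboard package must be installed:
+ ```
+ from azureml.core import Workspace, Experiment
+ from azureml.tensorboard import Tensorboard
+
+ ws = Workspace.from_config()
+ runs = list(Experiment(ws, "my-experiment").get_runs())   # hypothetical experiment name
+
+ # Mount log folders under run.display_name/run.id instead of run.id alone.
+ tb = Tensorboard(runs, use_display_name=True)
+ tb.start()   # prints the local TensorBoard URL
+ # ... browse the logs, then shut the local server down ...
+ tb.stop()
+ ```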
++
+## 2021-11-08
+
+### Azure Machine Learning SDK for Python v1.36.0
++ **Bug fixes and improvements**
+ + **azureml-automl-dnn-vision**
+ + Cleaned up minor typos on some error messages.
+ + **azureml-contrib-reinforcementlearning**
+ + Submitting Reinforcement Learning runs that use simulators is no longer supported.
+ + **azureml-core**
+ + Added support for partitioned premium blob.
+ + Specifying non-public clouds for Managed Identity authentication is no longer supported.
+ + Users can migrate an AKS web service to an online endpoint and deployment managed by CLI (v2).
+ + The instance type for training jobs on Kubernetes compute targets can now be set via a RunConfiguration property: run_config.kubernetescompute.instance_type.
+ + **azureml-defaults**
+ + Removed redundant dependencies like gunicorn and werkzeug
+ + **azureml-interpret**
+ + azureml-interpret package updated to 0.21.* version of interpret-community
+ + **azureml-pipeline-steps**
+ + Deprecate MpiStep in favor of using CommandStep for running ML training (including distributed training) in pipelines.
+ + **azureml-train-automl-runtime**
+ + Update the AutoML model test predictions output format docs.
+ + Added docstring descriptions for the Naive, SeasonalNaive, Average, and SeasonalAverage forecasting models.
+ + Featurization summary is now stored as an artifact on the run (check for a file named 'featurization_summary.json' under the outputs folder)
+ + Enable categorical indicators support for Tabnet Learner.
+ + Add downsample parameter to automl_setup_model_explanations to allow users to get explanations on all data without downsampling by setting this parameter to be false.
+
+
+## 2021-10-11
+
+### Azure Machine Learning SDK for Python v1.35.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Enable binary metrics calculation
+ + **azureml-contrib-fairness**
+ + Improve error message on failed dashboard download
+ + **azureml-core**
+ + Bug in specifying non-public clouds for Managed Identity authentication has been resolved.
+ + Dataset.File.upload_directory() and Dataset.Tabular.register_pandas_dataframe() experimental flags are now removed.
+ + Experimental flags are now removed in partition_by() method of TabularDataset class.
+ + **azureml-pipeline-steps**
+ + Experimental flags are now removed for the `partition_keys` parameter of the ParallelRunConfig class.
+ + **azureml-interpret**
+ + azureml-interpret package updated to interpret-community 0.20.*
+ + **azureml-mlflow**
+ + Made it possible to log artifacts and images with MLflow using subdirectories
+ + **azureml-responsibleai**
+ + Improve error message on failed dashboard download
+ + **azureml-train-automl-client**
+ + Added support for computer vision tasks such as Image Classification, Object Detection and Instance Segmentation. Detailed documentation can be found at: [Set up AutoML to train computer vision models with Python (v1)](how-to-auto-train-image-models-v1.md).
+ + Enable binary metrics calculation
+ + **azureml-train-automl-runtime**
+ + Add TCNForecaster support to model test runs.
+ + Update the model test predictions.csv output format. The output columns now include the original target values and the features that were passed in to the test run. This can be turned off by setting `test_include_predictions_only=True` in `AutoMLConfig` or by setting `include_predictions_only=True` in `ModelProxy.test()`. If the user has requested to only include predictions, then the output format looks like this (forecasting is the same as regression): Classification => [predicted values] [probabilities]; Regression => [predicted values]. Otherwise (the default): Classification => [original test data labels] [predicted values] [probabilities] [features]; Regression => [original test data labels] [predicted values] [features]. The `[predicted values]` column name = `[label column name] + "_predicted"`. The `[probabilities]` column names = `[class name] + "_predicted_proba"`. If no target column was passed in as input to the test run, then `[original test data labels]` will not be in the output.
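+ A small, hedged sketch for inspecting the predictions output described in the preceding item; the run ID and the `predictions.csv` file name are assumptions, so confirm the exact path under the test run's outputs:
+ ```
+ import pandas as pd
+ from azureml.core import Workspace, Run
+
+ ws = Workspace.from_config()
+ test_run = Run.get(ws, run_id="AutoML_xxx_test")   # hypothetical ID of the model test run
+
+ # The output file name below is an assumption; check the test run's Outputs + logs tab.
+ test_run.download_file("predictions.csv", output_file_path="predictions.csv")
+
+ preds = pd.read_csv("predictions.csv")
+ # Default format: original labels, "<label>_predicted", per-class "<class>_predicted_proba"
+ # columns (classification), plus the input features.
+ print(preds.columns.tolist())
+ ```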
+
+## 2021-09-07
+
+### Azure Machine Learning SDK for Python v1.34.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Added support for re-fitting a previously trained forecasting pipeline.
+ + Added ability to get predictions on the training data (in-sample prediction) for forecasting.
+ + **azureml-automl-runtime**
+ + Add support to return predicted probabilities from a deployed endpoint of an AutoML classifier model.
+ + Added a forecasting option for users to specify that all predictions should be integers.
+ + Removed the target column name from being part of model explanation feature names for local experiments with training_data_label_column_name as dataset inputs.
+ + Added support for re-fitting a previously trained forecasting pipeline.
+ + Added ability to get predictions on the training data (in-sample prediction) for forecasting.
+ + **azureml-core**
+ + Added support to set stream column type, mount and download stream columns in tabular dataset.
+ + New optional fields added to Kubernetes.attach_configuration(identity_type=None, identity_ids=None) which allow attaching KubernetesCompute with either SystemAssigned or UserAssigned identity. New identity fields will be included when calling print(compute_target) or compute_target.serialize(): identity_type, identity_id, principal_id, and tenant_id/client_id.
+ + **azureml-dataprep**
+ + Added support to set stream column type for tabular dataset. added support to mount and download stream columns in tabular dataset.
+ + **azureml-defaults**
+ + The dependency `azureml-inference-server-http==0.3.1` has been added to `azureml-defaults`.
+ + **azureml-mlflow**
+ + Allow pagination of list_experiments API by adding `max_results` and `page_token` optional params. For documentation, see MLflow official docs.
+ + **azureml-sdk**
+ + Replaced dependency on deprecated package(azureml-train) inside azureml-sdk.
+ + Add azureml-responsibleai to azureml-sdk extras
+ + **azureml-train-automl-client**
+ + Expose the `test_data` and `test_size` parameters in `AutoMLConfig`. These parameters can be used to automatically start a test run after the model training phase has been completed. The test run will compute predictions using the best model and will generate metrics given these predictions.
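+ A minimal sketch of the `test_data`/`test_size` parameters described above; the dataset names, label column, and compute target are hypothetical:
+ ```
+ from azureml.core import Workspace, Dataset
+ from azureml.train.automl import AutoMLConfig
+
+ ws = Workspace.from_config()
+ train = Dataset.get_by_name(ws, "my-training-data")   # hypothetical registered datasets
+ test = Dataset.get_by_name(ws, "my-test-data")
+
+ automl_config = AutoMLConfig(
+     task="classification",
+     training_data=train,
+     label_column_name="label",                        # hypothetical label column
+     test_data=test,                                   # starts a test run automatically after training
+     compute_target=ws.compute_targets["cpu-cluster"], # hypothetical compute target name
+     primary_metric="AUC_weighted",
+ )
+ # Alternatively, pass test_size=0.2 instead of test_data to hold out 20% of the data for the test run.
+ ```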
+
+## 2021-08-24
+
+### Azure Machine Learning Experimentation User Interface
+ + **Run Delete**
+ + Run Delete is a new functionality that allows users to delete one or multiple runs from their workspace.
+ + This functionality can help users reduce storage costs and manage storage capacity by regularly deleting runs and experiments from the UI directly.
+ + **Batch Cancel Run**
+ + Batch Cancel Run is new functionality that allows users to select one or multiple runs to cancel from their run list.
+ + This functionality can help users cancel multiple queued runs and free up space on their cluster.
+
+## 2021-08-18
+
+### Azure Machine Learning Experimentation User Interface
+ + **Run Display Name**
+ + The Run Display Name is a new, editable and optional display name that can be assigned to a run.
+ + This name can help with more effectively tracking, organizing and discovering the runs.
+ + The Run Display Name is defaulted to an adjective_noun_guid format (Example: awesome_watch_2i3uns).
+ + This default name can be edited to a more customizable name. This can be edited from the Run details page in the Azure Machine Learning studio user interface.
+
+## 2021-08-02
+
+### Azure Machine Learning SDK for Python v1.33.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Improved error handling around XGBoost model retrieval.
+ + Added possibility to convert the predictions from float to integers for forecasting and regression tasks.
+ + Updated default value for enable_early_stopping in AutoMLConfig to True.
+ + **azureml-automl-runtime**
+ + Added possibility to convert the predictions from float to integers for forecasting and regression tasks.
+ + Updated default value for enable_early_stopping in AutoMLConfig to True.
+ + **azureml-contrib-automl-pipeline-steps**
+ + Hierarchical timeseries (HTS) is enabled for forecasting tasks through pipelines.
+ + Add Tabular dataset support for inferencing
+ + Custom path can be specified for the inference data
+ + **azureml-contrib-reinforcementlearning**
+ + Some properties in `azureml.core.environment.DockerSection` are deprecated, such as `shm_size` property used by Ray workers in reinforcement learning jobs. This property can now be specified in `azureml.contrib.train.rl.WorkerConfiguration` instead.
+ + **azureml-core**
+ + Fixed a hyperlink in `ScriptRunConfig.distributed_job_config` documentation
+ + Azure Machine Learning compute clusters can now be created in a location different from the location of the workspace. This is useful for maximizing idle capacity allocation and managing quota utilization across different locations without having to create more workspaces just to use quota and create a compute cluster in a particular location. For more information, see [Create an Azure Machine Learning compute cluster](how-to-create-attach-compute-cluster.md?tabs=python).
+ + Added display_name as a mutable name field of Run object.
+ + Dataset from_files now supports skipping of data extensions for large input data
+ + **azureml-dataprep**
+ + Fixed a bug where to_dask_dataframe would fail because of a race condition.
+ + Dataset from_files now supports skipping of data extensions for large input data
+ + **azureml-defaults**
+ + We are removing the dependency azureml-model-management-sdk==1.0.1b6.post1 from azureml-defaults.
+ + **azureml-interpret**
+ + updated azureml-interpret to interpret-community 0.19.*
+ + **azureml-pipeline-core**
+ + Hierarchical timeseries (HTS) is enabled for forecasting tasks through pipelines.
+ + **azureml-train-automl-client**
+ + Switch to using blob store for caching in Automated ML.
+ + Hierarchical timeseries (HTS) is enabled for forecasting tasks through pipelines.
+ + Improved error handling around XGBoost model retrieval.
+ + Updated default value for enable_early_stopping in AutoMLConfig to True.
+ + **azureml-train-automl-runtime**
+ + Switch to using blob store for caching in Automated ML.
+ + Hierarchical timeseries (HTS) is enabled for forecasting tasks through pipelines.
+ + Updated default value for enable_early_stopping in AutoMLConfig to True.
++
+## 2021-07-06
+
+### Azure Machine Learning SDK for Python v1.32.0
++ **Bug fixes and improvements**
+ + **azureml-core**
+ + Expose diagnose workspace health in SDK/CLI
+ + **azureml-defaults**
+ + Added `opencensus-ext-azure==1.0.8` dependency to azureml-defaults
+ + **azureml-pipeline-core**
+ + Updated the AutoMLStep to use prebuilt images when the environment for job submission matches the default environment
+ + **azureml-responsibleai**
+ + New error analysis client added to upload, download and list error analysis reports
+ + Ensure `raiwidgets` and `responsibleai` packages are version synchronized
+ + **azureml-train-automl-runtime**
+ + Set the time allocated to dynamically search across various featurization strategies to a maximum of one-fourth of the overall experiment timeout
++
+## 2021-06-21
+
+### Azure Machine Learning SDK for Python v1.31.0
++ **Bug fixes and improvements**
+ + **azureml-core**
+ + Improved documentation for platform property on Environment class
+ + Changed default AML Compute node scale down time from 120 seconds to 1800 seconds
+ + Updated default troubleshooting link displayed on the portal for troubleshooting failed runs to: https://aka.ms/azureml-run-troubleshooting
+ + **azureml-automl-runtime**
+ + Data Cleaning: Samples with target values in [None, "", "nan", np.nan] will be dropped prior to featurization and/or model training
+ + **azureml-interpret**
+ + Prevent flush task queue error on remote AzureML runs that use ExplanationClient by increasing timeout
+ + **azureml-pipeline-core**
+ + Add jar parameter to synapse step
+ + **azureml-train-automl-runtime**
+ + Fix high cardinality guardrails to be more aligned with docs
+
+## 2021-06-07
+
+### Azure Machine Learning SDK for Python v1.30.0
++ **Bug fixes and improvements**
+ + **azureml-core**
+ + Pin dependency `ruamel-yaml` to < 0.17.5 as a breaking change was released in 0.17.5.
+ + `aml_k8s_config` property is being replaced with `namespace`, `default_instance_type`, and `instance_types` parameters for `KubernetesCompute` attach.
+ + The workspace sync keys operation was changed to a long-running operation.
+ + **azureml-automl-runtime**
+ + Fixed problems where runs with big data may fail with `Elements of y_test cannot be NaN`.
+ + **azureml-mlflow**
+ + MLFlow deployment plugin bugfix for models with no signature.
+ + **azureml-pipeline-steps**
+ + ParallelRunConfig: update doc for process_count_per_node.
+ + **azureml-train-automl-runtime**
+ + Support for custom defined quantiles during MM inference
+ + Support for forecast_quantiles during batch inference.
+ + **azureml-contrib-automl-pipeline-steps**
+ + Support for custom defined quantiles during MM inference
+ + Support for forecast_quantiles during batch inference.
+
+## 2021-05-25
+
+### Announcing the CLI (v2) for Azure Machine Learning
+
+The `ml` extension to the Azure CLI is the next-generation interface for Azure Machine Learning. It enables you to train and deploy models from the command line, with features that accelerate scaling data science up and out while tracking the model lifecycle. [Install and set up the CLI (v2)](../how-to-configure-cli.md).
+
+### Azure Machine Learning SDK for Python v1.29.0
++ **Bug fixes and improvements**
+ + **Breaking changes**
+ + Dropped support for Python 3.5.
+ + **azureml-automl-runtime**
+ + Fixed a bug where the STLFeaturizer failed if the time-series length was shorter than the seasonality. This error manifested as an IndexError. The case is handled now without error, though the seasonal component of the STL will just consist of zeros in this case.
+ + **azureml-contrib-automl-dnn-vision**
+ + Added a method for batch inferencing with file paths.
+ + **azureml-contrib-gbdt**
+ + The azureml-contrib-gbdt package has been deprecated, might not receive future updates, and will be removed from the distribution altogether.
+ + **azureml-core**
+ + Corrected explanation of parameter create_if_not_exists in Datastore.register_azure_blob_container.
+ + Added sample code to DatasetConsumptionConfig class.
+ + Added support for `step` as an alternative axis for scalar metric values in `run.log()` (see the sketch after this list).
+ + **azureml-dataprep**
+ + Limit partition size accepted in `_with_partition_size()` to 2GB
+ + **azureml-interpret**
+ + update azureml-interpret to the latest interpret-core package version
+ + Dropped support for SHAP DenseData, which has been deprecated in SHAP 0.36.0.
+ + Enable `ExplanationClient` to upload to a user specified datastore.
+ + **azureml-mlflow**
+ + Move azureml-mlflow to mlflow-skinny to reduce the dependency footprint while maintaining full plugin support
+ + **azureml-pipeline-core**
+ + PipelineParameter code sample is updated in the reference doc to use correct parameter.
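+ The run.log() step support mentioned above can be used roughly as follows; the `step` keyword name is assumed from the note, and the metric values are placeholders:
+ ```
+ from azureml.core import Run
+
+ run = Run.get_context()   # the current run, when executed inside an Azure ML job
+
+ # Log a scalar metric against an explicit step value instead of the default logging order.
+ for step, loss in enumerate([0.9, 0.7, 0.5, 0.4]):   # placeholder values
+     run.log("loss", loss, step=step)
+ ```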
++
+## 2021-05-10
+
+### Azure Machine Learning SDK for Python v1.28.0
++ **Bug fixes and improvements**
+ + **azureml-automl-runtime**
+ + Improved AutoML Scoring script to make it consistent with designer
+ + Patch bug where forecasting with the Prophet model would throw a "missing column" error if trained on an earlier version of the SDK.
+ + Added the ARIMAX model to the public-facing, forecasting-supported model lists of the AutoML SDK. Here, ARIMAX is a regression with ARIMA errors and a special case of the transfer function models developed by Box and Jenkins. For a discussion of how the two approaches are different, see [The ARIMAX model muddle](https://robjhyndman.com/hyndsight/arimax/). Unlike the rest of the multivariate models that use auto-generated, time-dependent features (hour of the day, day of the year, and so on) in AutoML, this model uses only features that are provided by the user, and it makes interpreting coefficients easy.
+ + **azureml-contrib-dataset**
+ + Updated documentation description with indication that libfuse should be installed while using mount.
+ + **azureml-core**
+ + Default CPU curated image is now mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04. Default GPU image is now mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.2-cudnn8-ubuntu18.04
+ + Run.fail() is now deprecated; use Run.tag() to mark a run as failed or use Run.cancel() to mark the run as canceled (see the sketch after this list).
+ + Updated documentation with a note that libfuse should be installed when mounting a file dataset.
+ + Add experimental register_dask_dataframe() support to tabular dataset.
+ + Support DatabricksStep with Azure Blob/ADL-S as inputs/outputs and expose the parameter permit_cluster_restart to let customers decide whether AML can restart the cluster when the I/O access configuration needs to be added to the cluster.
+ + **azureml-dataset-runtime**
+ + azureml-dataset-runtime now supports versions of pyarrow < 4.0.0
+ + **azureml-mlflow**
+ + Added support for deploying to AzureML via our MLFlow plugin.
+ + **azureml-pipeline-steps**
+ + Support DatabricksStep with Azure Blob/ADL-S as inputs/outputs and expose the parameter permit_cluster_restart to let customers decide whether AML can restart the cluster when the I/O access configuration needs to be added to the cluster.
+ + **azureml-synapse**
+ + Enable audience in msi authentication
+ + **azureml-train-automl-client**
+ + Added changed link for compute target doc
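+ A small sketch of the Run.fail() replacement mentioned above; the experiment name is hypothetical, and the tag key/value shown are just a convention, not a required schema:
+ ```
+ from azureml.core import Workspace, Experiment
+
+ ws = Workspace.from_config()
+ run = next(Experiment(ws, "my-experiment").get_runs())   # hypothetical experiment; latest run
+
+ # Instead of the deprecated run.fail():
+ run.tag("status", "failed")   # mark the run as failed via a tag
+ # or
+ run.cancel()                  # mark the run as canceled
+ ```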
++
+## 2021-04-19
+
+### Azure Machine Learning SDK for Python v1.27.0
++ **Bug fixes and improvements**
+ + **azureml-core**
+ + Added the ability to override the default timeout value for artifact uploading via the "AZUREML_ARTIFACTS_DEFAULT_TIMEOUT" environment variable.
+ + Fixed a bug where docker settings in Environment object on ScriptRunConfig are not respected.
+ + Allow partitioning a dataset when copying it to a destination.
+ + Added a custom mode to the OutputDatasetConfig to enable passing created Datasets in pipelines through a link function. These support enhancements made to enable Tabular Partitioning for PRS.
+ + Added a new KubernetesCompute compute type to azureml-core.
+ + **azureml-pipeline-core**
+ + Adding a custom mode to the OutputDatasetConfig and enabling a user to pass through created Datasets in pipelines through a link function. File path destinations support placeholders. These support the enhancements made to enable Tabular Partitioning for PRS.
+ + Addition of new KubernetesCompute compute type to azureml-core.
+ + **azureml-pipeline-steps**
+ + Addition of new KubernetesCompute compute type to azureml-core.
+ + **azureml-synapse**
+ + Update spark UI url in widget of azureml synapse
+ + **azureml-train-automl-client**
+ + The STL featurizer for the forecasting task now uses a more robust seasonality detection based on the frequency of the time series.
+ + **azureml-train-core**
+ + Fixed bug where docker settings in Environment object are not respected.
+ + Addition of new KubernetesCompute compute type to azureml-core.
++
+## 2021-04-05
+
+### Azure Machine Learning SDK for Python v1.26.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Fixed an issue where Naive models would be recommended in AutoMLStep runs and fail with lag or rolling window features. These models will not be recommended when target lags or target rolling window size are set.
+ + Changed console output when submitting an AutoML run to show a portal link to the run.
+ + **azureml-core**
+ + Added HDFS mode in documentation.
+ + Added support to understand File Dataset partitions based on glob structure.
+ + Added support for updating the container registry associated with the AzureML Workspace.
+ + Deprecated Environment attributes under the DockerSection - "enabled", "shared_volume" and "arguments" are a part of DockerConfiguration in RunConfiguration now.
+ + Updated Pipeline CLI clone documentation
+ + Updated portal URIs to include tenant for authentication
+ + Removed experiment name from run URIs to avoid redirects
+ + Updated experiment URI to use experiment ID.
+ + Bug fixes for attaching remote compute with AzureML CLI.
+ + **azureml-interpret**
+ + azureml-interpret updated to use interpret-community 0.17.0
+ + **azureml-opendatasets**
+ + Added type validation for the input start date and end date, with an error indication if they are not of datetime type.
+ + **azureml-parallel-run**
+ + [Experimental feature] Added the `partition_keys` parameter to ParallelRunConfig. If specified, the input dataset(s) are partitioned into mini-batches by the specified keys. All input datasets must be partitioned datasets (see the sketch after this list).
+ + **azureml-pipeline-steps**
+ + Bugfix - supporting path_on_compute while passing dataset configuration as download.
+ + Deprecate RScriptStep in favor of using CommandStep for running R scripts in pipelines.
+ + Deprecate EstimatorStep in favor of using CommandStep for running ML training (including distributed training) in pipelines.
+ + **azureml-sdk**
+ + Update python_requires to < 3.9 for azureml-sdk
+ + **azureml-train-automl-client**
+ + Changed console output when submitting an AutoML run to show a portal link to the run.
+ + **azureml-train-core**
+ + Deprecated DockerSection's 'enabled', 'shared_volume', and 'arguments' attributes in favor of using DockerConfiguration with ScriptRunConfig.
+ + Use Azure Open Datasets for MNIST dataset
+ + Hyperdrive error messages have been updated.
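+ A hedged sketch of the experimental `partition_keys` parameter noted above; the folder, script, environment, compute target, and key names are hypothetical, and all input datasets must already be partitioned:
+ ```
+ from azureml.core import Workspace
+ from azureml.pipeline.steps import ParallelRunConfig
+
+ ws = Workspace.from_config()
+
+ parallel_run_config = ParallelRunConfig(
+     source_directory="scripts",                        # hypothetical folder with the entry script
+     entry_script="batch_score.py",                     # hypothetical entry script
+     environment=ws.environments["AzureML-Minimal"],    # any registered environment
+     compute_target=ws.compute_targets["cpu-cluster"],  # hypothetical compute target name
+     node_count=2,
+     error_threshold=10,
+     output_action="append_row",
+     partition_keys=["store_id", "region"],             # mini-batches are formed per unique key combination
+ )
+ ```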
++
+## 2021-03-22
+
+### Azure Machine Learning SDK for Python v1.25.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Changed console output when submitting an AutoML run to show a portal link to the run.
+ + **azureml-core**
+ + Started to support updating the container registry for a workspace in the SDK and CLI.
+ + Deprecated DockerSection's 'enabled', 'shared_volume', and 'arguments' attributes in favor of using DockerConfiguration with ScriptRunConfig.
+ + Updated Pipeline CLI clone documentation
+ + Updated portal URIs to include tenant for authentication
+ + Removed experiment name from run URIs to avoid redirects
+ + Updated experiment URI to use experiment ID.
+ + Bug fixes for attaching remote compute using az CLI
+ + Added support to understand File Dataset partitions based on glob structure.
+ + **azureml-interpret**
+ + azureml-interpret updated to use interpret-community 0.17.0
+ + **azureml-opendatasets**
+ + Added type validation for the input start date and end date, with an error indication if they are not of datetime type.
+ + **azureml-pipeline-core**
+ + Bugfix - supporting path_on_compute while passing dataset configuration as download.
+ + **azureml-pipeline-steps**
+ + Bugfix - supporting path_on_compute while passing dataset configuration as download.
+ + Deprecate RScriptStep in favor of using CommandStep for running R scripts in pipelines.
+ + Deprecate EstimatorStep in favor of using CommandStep for running ML training (including distributed training) in pipelines.
+ + **azureml-train-automl-runtime**
+ + Changed console output when submitting an AutoML run to show a portal link to the run.
+ + **azureml-train-core**
+ + Deprecated DockerSection's 'enabled', 'shared_volume', and 'arguments' attributes in favor of using DockerConfiguration with ScriptRunConfig (see the sketch after this list).
+ + Use Azure Open Datasets for MNIST dataset
+ + Hyperdrive error messages have been updated.
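+ A hedged sketch of moving from the deprecated DockerSection attributes to DockerConfiguration, as mentioned above; the folder, script, compute target, and experiment name are hypothetical, and the parameter names (use_docker, shm_size, docker_runtime_config) should be checked against the SDK v1 reference:
+ ```
+ from azureml.core import Workspace, Experiment, Environment, ScriptRunConfig
+ from azureml.core.runconfig import DockerConfiguration
+
+ ws = Workspace.from_config()
+ env = Environment.get(ws, "AzureML-Minimal")   # any registered environment
+
+ # DockerConfiguration replaces the deprecated DockerSection attributes
+ # (enabled / shared_volume / arguments) on the run configuration.
+ docker_config = DockerConfiguration(use_docker=True, shm_size="2g")
+
+ src = ScriptRunConfig(
+     source_directory="scripts",      # hypothetical folder
+     script="train.py",               # hypothetical training script
+     compute_target="cpu-cluster",    # hypothetical compute target name
+     environment=env,
+     docker_runtime_config=docker_config,
+ )
+ run = Experiment(ws, "docker-config-demo").submit(src)
+ ```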
++
+## 2021-03-31
+### Azure Machine Learning studio Notebooks Experience (March Update)
++ **New features**
+ + Render CSV/TSV. Users will be able to render a TSV/CSV file in a grid format for easier data analysis.
+ + SSO Authentication for Compute Instance. Users can now easily authenticate any new compute instances directly in the Notebook UI, making it easier to authenticate and use Azure SDKs directly in AzureML.
+ + Compute Instance Metrics. Users will be able to view compute metrics like CPU usage and memory via terminal.
+ + File Details. Users can now see file details including the last modified time, and file size by clicking the 3 dots beside a file.
+++ **Bug fixes and improvements**
+ + Improved page load times.
+ + Improved performance.
+ + Improved speed and kernel reliability.
+ + Gain vertical real estate by permanently moving Notebook file pane up
+ + Links are now clickable in Terminal
+ + Improved Intellisense performance
++
+## 2021-03-08
+
+### Azure Machine Learning SDK for Python v1.24.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Removed backwards compatible imports from `azureml.automl.core.shared`. Module not found errors in the `azureml.automl.core.shared` namespace can be resolved by importing from `azureml.automl.runtime.shared`.
+ + **azureml-contrib-automl-dnn-vision**
+ + Exposed object detection yolo model.
+ + **azureml-contrib-dataset**
+ + Added functionality to filter Tabular Datasets by column values and File Datasets by metadata.
+ + **azureml-contrib-fairness**
+ + Include JSON schema in wheel for `azureml-contrib-fairness`
+ + **azureml-contrib-mir**
+ + When show_output is set to True while deploying models, the inference configuration and deployment configuration will be replayed before sending the request to the server.
+ + **azureml-core**
+ + Added functionality to filter Tabular Datasets by column values and File Datasets by metadata.
+ + Previously, it was possible for users to create provisioning configurations for ComputeTargets that did not satisfy the password strength requirements for the `admin_user_password` field (i.e., that they must contain at least 3 of the following: 1 lowercase letter, 1 uppercase letter, 1 digit, and 1 special character from the following set: ``\`~!@#$%^&*()=+_[]{}|;:./'",<>?``). If the user created a configuration with a weak password and ran a job using that configuration, the job would fail at runtime. Now, the call to `AmlCompute.provisioning_configuration` will throw a `ComputeTargetException` with an accompanying error message explaining the password strength requirements.
+ + Additionally, it was also possible in some cases to specify a configuration with a negative number of maximum nodes. It is no longer possible to do this. Now, `AmlCompute.provisioning_configuration` will throw a `ComputeTargetException` if the `max_nodes` argument is a negative integer.
+ + When show_output is set to True while deploying models, the inference configuration and deployment configuration will be displayed.
+ + When show_output is set to True while waiting for the completion of model deployment, the progress of the deployment operation will be displayed.
+ + Allow a customer-specified AzureML auth config directory through the environment variable AZUREML_AUTH_CONFIG_DIR.
+ + Previously, it was possible to create a provisioning configuration with the minimum node count less than the maximum node count. The job would run but fail at runtime. This bug has now been fixed. If you now try to create a provisioning configuration with `min_nodes < max_nodes` the SDK will raise a `ComputeTargetException` (see the sketch after this list).
+ + **azureml-interpret**
+ + fix explanation dashboard not showing aggregate feature importances for sparse engineered explanations
+ + optimized memory usage of ExplanationClient in azureml-interpret package
+ + **azureml-train-automl-client**
+ + Fixed show_output=False to return control to the user when running using spark.
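+ A small sketch of the new fail-fast validation in `AmlCompute.provisioning_configuration` described above; the VM size and cluster name are placeholders, and the negative `max_nodes` is deliberately invalid:
+ ```
+ from azureml.core import Workspace
+ from azureml.core.compute import AmlCompute, ComputeTarget
+ from azureml.exceptions import ComputeTargetException
+
+ ws = Workspace.from_config()
+
+ try:
+     # Invalid values (negative max_nodes, weak admin_user_password, min/max node conflicts)
+     # now raise ComputeTargetException here instead of failing later at job runtime.
+     config = AmlCompute.provisioning_configuration(
+         vm_size="STANDARD_DS3_V2",   # placeholder VM size
+         min_nodes=0,
+         max_nodes=-1,                # deliberately invalid
+     )
+     ComputeTarget.create(ws, "demo-cluster", config)   # hypothetical cluster name
+ except ComputeTargetException as ex:
+     print("Invalid provisioning configuration:", ex)
+ ```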
+
+## 2021-02-28
+### Azure Machine Learning studio Notebooks Experience (February Update)
++ **New features**
+ + [Native Terminal (GA)](../how-to-access-terminal.md). Users will now have access to an integrated terminal as well as Git operation via the integrated terminal.
+ + Notebook Snippets (preview). Common Azure ML code excerpts are now available at your fingertips. Navigate to the code snippets panel, accessible via the toolbar, or activate the in-code snippets menu using Ctrl + Space.
+ + [Keyboard Shortcuts](../how-to-run-jupyter-notebooks.md#useful-keyboard-shortcuts). Full parity with keyboard shortcuts available in Jupyter.
+ + Indicate Cell parameters. Shows users which cells in a notebook are parameter cells and can run parameterized notebooks via [Papermill](https://github.com/nteract/papermill) on the Compute Instance.
+ + Terminal and Kernel session
+ + Sharing Button. Users can now share any file in the Notebook file explorer by right-clicking the file and using the share button.
++++ **Bug fixes and improvements**
+ + Improved page load times
+ + Improved performance
+ + Improved speed and kernel reliability
+ + Added spinning wheel to show progress for all ongoing [Compute Instance operations](../how-to-run-jupyter-notebooks.md#status-indicators).
+ + Right click in File Explorer. Right-clicking any file will now open file operations.
++
+## 2021-02-16
+
+### Azure Machine Learning SDK for Python v1.23.0
++ **Bug fixes and improvements**
+ + **azureml-core**
+ + [Experimental feature] Add support to link synapse workspace into AML as a linked service
+ + [Experimental feature] Add support to attach synapse spark pool into AML as a compute
+ + [Experimental feature] Add support for identity-based data access. Users can register datastores or datasets without providing credentials. In that case, the user's Azure AD token or the managed identity of the compute target is used for authentication. To learn more, see [Connect to storage by using identity-based data access](./how-to-identity-based-data-access.md) and the sketch after this list.
+ + **azureml-pipeline-steps**
+ + [Experimental feature] Add support for [SynapseSparkStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.synapsesparkstep)
+ + **azureml-synapse**
+ + [Experimental feature] Add support for Spark magic to run an interactive session in a Synapse Spark pool.
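+
+ As referenced above, a minimal sketch of registering a credential-less (identity-based access) blob datastore; the datastore, container, and storage account names are hypothetical.
+
+ ```python
+ from azureml.core import Datastore, Workspace
+
+ ws = Workspace.from_config()
+
+ # Register a blob container without providing credentials; at access time the
+ # user's Azure AD token or the compute's managed identity is used instead.
+ datastore = Datastore.register_azure_blob_container(
+     workspace=ws,
+     datastore_name="credentialless_blob",   # hypothetical
+     container_name="mycontainer",           # hypothetical
+     account_name="mystorageaccount"         # hypothetical
+ )
+ ```
+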
++ **Bug fixes and improvements**
+ + **azureml-automl-runtime**
+ + In this update, we added Holt-Winters exponential smoothing to the forecasting toolbox of the AutoML SDK. Given a time series, the best model is selected by [AICc (Corrected Akaike's Information Criterion)](https://otexts.com/fpp3/selecting-predictors.html#selecting-predictors) and returned.
+ + AutoML will now generate two log files instead of one. Log statements will go to one or the other depending on which process the log statement was generated in.
+ + Remove unnecessary in-sample prediction during model training with cross-validations. This may decrease model training time in some cases, especially for time-series forecasting models.
+ + **azureml-contrib-fairness**
+ + Add a JSON schema for the dashboardDictionary uploads.
+ + **azureml-contrib-interpret**
+ + The azureml-contrib-interpret README was updated to reflect that the package will be removed in the next update, after being deprecated since October; use the azureml-interpret package instead.
+ + **azureml-core**
+ + Previously, it was possible to create a provisioning configuration with a minimum node count greater than the maximum node count. This has now been fixed. If you now try to create a provisioning configuration with `min_nodes > max_nodes`, the SDK raises a `ComputeTargetException`.
+ + Fixed a bug in wait_for_completion in AmlCompute that caused the function to return control before the operation was actually complete.
+ + Run.fail() is now deprecated; use Run.tag() to mark a run as failed, or use Run.cancel() to mark the run as canceled (see the sketch after this list).
+ + Show the error message 'Environment name expected str, {} found' when the provided environment name is not a string.
+ + **azureml-train-automl-client**
+ + Fixed a bug that prevented AutoML experiments performed on Azure Databricks clusters from being canceled.
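+
+ As noted above, a minimal sketch of the recommended replacements for the deprecated Run.fail(); the tag name and value are hypothetical, and the code is assumed to run inside a submitted run.
+
+ ```python
+ from azureml.core import Run
+
+ run = Run.get_context()
+ # Run.fail() is deprecated; tag the run as failed instead, or cancel it.
+ run.tag("failed", "true")    # hypothetical tag name/value
+ # run.cancel()               # alternatively, mark the run as canceled
+ ```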
++
+## 2021-02-09
+
+### Azure Machine Learning SDK for Python v1.22.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Fixed bug where an extra pip dependency was added to the conda yml file for vision models.
+ + **azureml-automl-runtime**
+ + Fixed a bug where classical forecasting models (e.g. AutoArima) could receive training data wherein rows with imputed target values were not present. This violated the data contract of these models.
+ + Fixed various bugs with lag-by-occurrence behavior in the time-series lagging operator. Previously, the lag-by-occurrence operation did not mark all imputed rows correctly and so would not always generate the correct occurrence lag values. Also fixed some compatibility issues between the lag operator and the rolling window operator with lag-by-occurrence behavior. This previously resulted in the rolling window operator dropping some rows from the training data that it should otherwise use.
+ + **azureml-core**
+ + Adding support for Token Authentication by audience.
+ + Add `process_count` to [PyTorchConfiguration](/python/api/azureml-core/azureml.core.runconfig.pytorchconfiguration) to support multi-process multi-node PyTorch jobs.
+ + **azureml-pipeline-steps**
+ + [CommandStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.commandstep) now GA and no longer experimental.
+ + [ParallelRunConfig](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunconfig): added the arguments allowed_failed_count and allowed_failed_percent to check the error threshold at the mini-batch level. The error threshold now has three flavors:
+ + error_threshold - the number of allowed failed mini batch items;
+ + allowed_failed_count - the number of allowed failed mini batches;
+ + allowed_failed_percent - the percent of allowed failed mini batches.
+
+ A job stops if it exceeds any of them. error_threshold is still required for backward compatibility; set the value to -1 to ignore it. A minimal configuration sketch appears at the end of this release's list.
+ + Fixed whitespace handling in AutoMLStep name.
+ + ScriptRunConfig is now supported by HyperDriveStep
+ + **azureml-train-core**
+ + HyperDrive runs invoked from a ScriptRun will now be considered a child run.
+ + Add `process_count` to [PyTorchConfiguration](/python/api/azureml-core/azureml.core.runconfig.pytorchconfiguration) to support multi-process multi-node PyTorch jobs.
+ + **azureml-widgets**
+ + Add widget ParallelRunStepDetails to visualize status of a ParallelRunStep.
+ + Allows hyperdrive users to see an additional axis on the parallel coordinates chart that shows the metric value corresponding to each set of hyperparameters for each child run.
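+
+ As referenced above, a minimal sketch of the new mini-batch error thresholds in ParallelRunConfig; the workspace config, environment choice, source directory, entry script, and compute target name are hypothetical.
+
+ ```python
+ from azureml.core import Environment, Workspace
+ from azureml.pipeline.steps import ParallelRunConfig
+
+ ws = Workspace.from_config()
+ batch_env = Environment.get(ws, name="AzureML-Minimal")   # hypothetical environment choice
+
+ parallel_run_config = ParallelRunConfig(
+     source_directory="scripts",            # hypothetical
+     entry_script="batch_score.py",         # hypothetical
+     error_threshold=-1,                    # still required; -1 means ignore it
+     allowed_failed_count=10,               # stop if more than 10 mini batches fail
+     allowed_failed_percent=5,              # or if more than 5% of mini batches fail
+     output_action="append_row",
+     environment=batch_env,
+     compute_target="cpu-cluster",          # hypothetical
+     node_count=2
+ )
+ ```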
++
+ ## 2021-01-31
+### Azure Machine Learning studio Notebooks Experience (January Update)
++ **New features**
+ + Native Markdown Editor in AzureML. Users can now render and edit markdown files natively in AzureML Studio.
+ + [Run Button for Scripts (.py, .R and .sh)](../how-to-run-jupyter-notebooks.md#run-a-notebook-or-python-script). Users can now easily run Python, R, and Bash scripts in AzureML.
+ + [Variable Explorer](../how-to-run-jupyter-notebooks.md#explore-variables-in-the-notebook). Explore the contents of variables and data frames in a pop-up panel. Users can easily check data type, size, and contents.
+ + [Table of Content](../how-to-run-jupyter-notebooks.md#navigate-with-a-toc). Navigate to sections of your notebook, indicated by Markdown headers.
+ + Export your notebook as LaTeX/HTML/Py. Create easy-to-share notebook files by exporting to LaTeX, HTML, or .py.
+ + IntelliCode. ML-powered results provide an enhanced [intelligent autocompletion experience](/visualstudio/intellicode/overview).
++ **Bug fixes and improvements**
+ + Improved page load times
+ + Improved performance
+ + Improved speed and kernel reliability
+
+
+ ## 2021-01-25
+
+### Azure Machine Learning SDK for Python v1.21.0
++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + Fixed CLI help text when using AmlCompute with UserAssigned Identity
+ + **azureml-contrib-automl-dnn-vision**
+ + Deploy and download buttons will become visible for AutoML vision runs, and models can be deployed or downloaded similar to other AutoML runs. There are two new files (scoring_file_v_1_0_0.py and conda_env_v_1_0_0.yml) which contain a script to run inferencing and a yml file to recreate the conda environment. The 'model.pth' file has also been renamed to use the '.pt' extension.
+ + **azureml-core**
+ + MSI support for azure-cli-ml
+ + User Assigned Managed Identity Support.
+ + With this change, customers can provide a user-assigned identity that is used to fetch the key from the customer key vault for encryption at rest.
+ + Fixed row_count=0 for the profile of very large files.
+ + Fixed an error in double conversion for delimited values with white space padding.
+ + Remove experimental flag for Output dataset GA
+ + Update documentation on how to fetch specific version of a Model
+ + Allow updating workspace for mixed mode access in case of private link
+ + Fix to remove additional registration on datastore for resume run feature
+ + Added CLI/SDK support for updating primary user assigned identity of workspace
+ + **azureml-interpret**
+ + updated azureml-interpret to interpret-community 0.16.0
+ + memory optimizations for explanation client in azureml-interpret
+ + **azureml-train-automl-runtime**
+ + Enabled streaming for ADB runs
+ + **azureml-train-core**
+ + Fix to remove additional registration on datastore for resume run feature
+ + **azureml-widgets**
+ + Customers will not see changes to existing run data visualization in the widget, and conditional hyperparameters are now optionally supported.
+ + The user run widget now includes a detailed explanation for why a run is in the queued state.
++
+ ## 2021-01-11
+
+### Azure Machine Learning SDK for Python v1.20.0
++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + framework_version added in OptimizationConfig. It is used when a model is registered with framework MULTI.
+ + **azureml-contrib-optimization**
+ + framework_version added in OptimizationConfig. It is used when a model is registered with framework MULTI.
+ + **azureml-pipeline-steps**
+ + Introduced CommandStep, which takes a command to run. The command can include executables, shell commands, scripts, and so on.
+ + **azureml-core**
+ + Workspace creation now supports user-assigned identity. Added user-assigned identity support to the SDK and CLI.
+ + Fixed issue on service.reload() to pick up changes on score.py in local deployment.
+ + `run.get_details()` has an extra field named "submittedBy" that displays the author's name for this run (see the sketch after this list).
+ + Edited the Model.register method documentation to mention how to register a model from a run directly.
+ + Fixed IOT-Server connection status change handling issue.
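+
+ As noted above, a minimal sketch of reading the new "submittedBy" field; it assumes the code runs inside a submitted run.
+
+ ```python
+ from azureml.core import Run
+
+ run = Run.get_context()
+ details = run.get_details()
+ # The returned dictionary now includes a "submittedBy" field with the author's name
+ print(details.get("submittedBy"))
+ ```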
+
+
+## 2020-12-31
+### Azure Machine Learning studio Notebooks Experience (December Update)
++ **New features**
+ + User Filename search. Users are now able to search all the files saved in a workspace.
+ + Markdown Side by Side support per Notebook Cell. In a notebook cell, users can now have the option to view rendered markdown and markdown syntax side-by-side.
+ + Cell Status Bar. The status bar indicates what state a code cell is in, whether a cell run was successful, and how long it took to run.
+
++ **Bug fixes and improvements**
+ + Improved page load times
+ + Improved performance
+ + Improved speed and kernel reliability
+
+
+## 2020-12-07
+
+### Azure Machine Learning SDK for Python v1.19.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Added experimental support for test data to AutoMLStep.
+ + Added the initial core implementation of test set ingestion feature.
+ + Moved references to sklearn.externals.joblib to depend directly on joblib.
+ + introduce a new AutoML task type of "image-instance-segmentation".
+ + **azureml-automl-runtime**
+ + Added the initial core implementation of test set ingestion feature.
+ + When all the strings in a text column have a length of exactly 1 character, the TfIdf word-gram featurizer won't work because its tokenizer ignores the strings with fewer than 2 characters. The current code change will allow AutoML to handle this use case.
+ + introduce a new AutoML task type of "image-instance-segmentation".
+ + **azureml-contrib-automl-dnn-nlp**
+ + Initial PR for new dnn-nlp package
+ + **azureml-contrib-automl-dnn-vision**
+ + introduce a new AutoML task type of "image-instance-segmentation".
+ + **azureml-contrib-automl-pipeline-steps**
+ + This new package is responsible for creating the steps required for the many models train/inference scenario. It also moves the train/inference code into the azureml.train.automl.runtime package so any future fixes are automatically available through curated environment releases.
+ + **azureml-contrib-dataset**
+ + introduce a new AutoML task type of "image-instance-segmentation".
+ + **azureml-core**
+ + Added the initial core implementation of test set ingestion feature.
+ + Fixing the xref warnings for documentation in azureml-core package
+ + Doc string fixes for Command support feature in SDK
+ + Added the command property to RunConfiguration. The feature enables users to run an actual command or executables on the compute through the AzureML SDK (see the sketch at the end of this release's list).
+ + Users can delete an empty experiment given the ID of that experiment.
+ + **azureml-dataprep**
+ + Added dataset support for Spark built with Scala 2.12. This adds to the existing 2.11 support.
+ + **azureml-mlflow**
+ + AzureML-MLflow adds safe guards in remote scripts to avoid early termination of submitted runs.
+ + **azureml-pipeline-core**
+ + Fixed a bug in setting a default pipeline for pipeline endpoint created via UI
+ + **azureml-pipeline-steps**
+ + Added experimental support for test data to AutoMLStep.
+ + **azureml-tensorboard**
+ + Fixing the xref warnings for documentation in azureml-core package
+ + **azureml-train-automl-client**
+ + Added experimental support for test data to AutoMLStep.
+ + Added the initial core implementation of test set ingestion feature.
+ + introduce a new AutoML task type of "image-instance-segmentation".
+ + **azureml-train-automl-runtime**
+ + Added the initial core implementation of test set ingestion feature.
+ + Fix the computation of the raw explanations for the best AutoML model if the AutoML models are trained using validation_size setting.
+ + Moved references to sklearn.externals.joblib to depend directly on joblib.
+ + **azureml-train-core**
+ + HyperDriveRun.get_children_sorted_by_primary_metric() should complete faster now
+ + Improved error handling in HyperDrive SDK.
+ + Deprecated all estimator classes in favor of using ScriptRunConfig to configure experiment runs. Deprecated classes include:
+ + MMLBase
+ + Estimator
+ + PyTorch
+ + TensorFlow
+ + Chainer
+ + SKLearn
+ + Deprecated the use of Nccl and Gloo as valid input types for Estimator classes in favor of using PyTorchConfiguration with ScriptRunConfig.
+ + Deprecated the use of Mpi as a valid input type for Estimator classes in favor of using MpiConfiguration with ScriptRunConfig.
+ + Adding command property to runconfiguration. The feature enables users to run an actual command or executables on the compute through AzureML SDK.
+
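+ As referenced in the azureml-core notes above, a minimal sketch of submitting a command instead of a script plus arguments; it uses ScriptRunConfig's command parameter, and the workspace config, source directory, command, and compute target name are hypothetical.
+
+ ```python
+ from azureml.core import Experiment, ScriptRunConfig, Workspace
+
+ ws = Workspace.from_config()
+
+ # Submit an arbitrary command instead of a script plus arguments
+ src = ScriptRunConfig(
+     source_directory="scripts",                       # hypothetical
+     command=["python", "train.py", "--epochs", "10"],
+     compute_target="cpu-cluster"                      # hypothetical
+ )
+ run = Experiment(ws, "command-example").submit(src)
+ ```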
+
+## 2020-11-30
+### Azure Machine Learning studio Notebooks Experience (November Update)
++ **New features**
+ + Native Terminal. Users now have access to an integrated terminal and Git operations via the [integrated terminal](../how-to-access-terminal.md).
+ + Duplicate Folder
+ + Costing for Compute Drop Down
+ + Offline Compute Pylance
++ **Bug fixes and improvements**
+ + Improved page load times
+ + Improved performance
+ + Improved speed and kernel reliability
+ + Large file upload. You can now upload files larger than 95 MB.
+
+## 2020-11-09
+
+### Azure Machine Learning SDK for Python v1.18.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Improved handling of short time series by allowing them to be padded with Gaussian noise.
+ + **azureml-automl-runtime**
+ + Throw ConfigException if a DateTime column has OutOfBoundsDatetime value
+ + Improved handling of short time series by allowing them to be padded with Gaussian noise.
+ + Making sure that each text column can leverage char-gram transform with the n-gram range based on the length of the strings in that text column
+ + Providing raw feature explanations for the best model for AutoML experiments running on the user's local compute.
+ + **azureml-core**
+ + Pin the package: pyjwt to avoid pulling in breaking versions in upcoming releases.
+ + Creating an experiment returns the active or last archived experiment with that same given name if such an experiment exists; otherwise it creates a new experiment.
+ + Calling get_experiment by name will return the active or last archived experiment with that given name.
+ + Users cannot rename an experiment while reactivating it.
+ + Improved error message to include potential fixes when a dataset is incorrectly passed to an experiment (e.g. ScriptRunConfig).
+ + Improved documentation for `OutputDatasetConfig.register_on_complete` to include the behavior of what will happen when the name already exists.
+ + Specifying dataset input and output names that have the potential to collide with common environment variables will now result in a warning
+ + Repurposed `grant_workspace_access` parameter when registering datastores. Set it to `True` to access data behind virtual network from Machine Learning studio.
+ [Learn more](../how-to-enable-studio-virtual-network.md)
+ + The linked service API is refined. Instead of providing a resource ID, three separate parameters are now defined in the configuration: sub_id, rg, and name.
+ + To enable customers to self-resolve token corruption issues, workspace token synchronization is now a public method.
+ + This change allows an empty string to be used as a value for a script_param
+ + **azureml-train-automl-client**
+ + Improved handling of short time series by allowing them to be padded with Gaussian noise.
+ + **azureml-train-automl-runtime**
+ + Throw ConfigException if a DateTime column has OutOfBoundsDatetime value
+ + Added support for providing raw feature explanations for best model for AutoML experiments running on user's local compute
+ + Improved handling of short time series by allowing them to be padded with Gaussian noise.
+ + **azureml-train-core**
+ + This change allows an empty string to be used as a value for a script_param
+ + **azureml-train-restclients-hyperdrive**
+ + README has been changed to offer more context
+ + **azureml-widgets**
+ + Add string support to charts/parallel-coordinates library for widget.
+
+## 2020-11-05
+
+### Data Labeling for image instance segmentation (polygon annotation) (preview)
+
+The image instance segmentation (polygon annotations) project type in data labeling is available now, so users can draw and annotate polygons around the contours of objects in images. Users can assign a class and a polygon to each object of interest within an image.
+
+Learn more about [image instance segmentation labeling](../how-to-label-data.md).
+++
+## 2020-10-26
+
+### Azure Machine Learning SDK for Python v1.17.0
++ **new examples**
+ + A new community-driven repository of examples is available at https://github.com/Azure/azureml-examples
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Fixed an issue where get_output may raise an XGBoostError.
+ + **azureml-automl-runtime**
+ + Time/calendar based features created by AutoML will now have the prefix.
+ + Fixed an IndexError occurring during training of StackEnsemble for classification datasets with large number of classes and subsampling enabled.
+ + Fixed an issue where VotingRegressor predictions may be inaccurate after refitting the model.
+ + **azureml-core**
+ + Additional detail added about relationship between AKS deployment configuration and Azure Kubernetes Service concepts.
+ + Environment client labels support. User can label Environments and reference them by label.
+ + **azureml-dataprep**
+ + Better error message when using currently unsupported Spark with Scala 2.12.
+ + **azureml-explain-model**
+ + The azureml-explain-model package is officially deprecated
+ + **azureml-mlflow**
+ + Resolved a bug in mlflow.projects.run against azureml backend where Finalizing state was not handled properly.
+ + **azureml-pipeline-core**
+ + Added support to create, list, and get a pipeline schedule based on a pipeline endpoint.
+ + Improved the documentation of PipelineData.as_dataset with an invalid usage example. Using PipelineData.as_dataset improperly will now result in a ValueException being thrown.
+ + Changed the HyperDriveStep pipelines notebook to register the best model within a PipelineStep directly after the HyperDriveStep run.
+ + **azureml-pipeline-steps**
+ + Changed the HyperDriveStep pipelines notebook to register the best model within a PipelineStep directly after the HyperDriveStep run.
+ + **azureml-train-automl-client**
+ + Fixed an issue where get_output may raise an XGBoostError.
+
+### Azure Machine Learning studio Notebooks Experience (October Update)
++ **New features**
+ + [Full virtual network support](../how-to-enable-studio-virtual-network.md)
+ + [Focus Mode](../how-to-run-jupyter-notebooks.md#focus-mode)
+ + Save notebooks Ctrl-S
+ + Line Numbers
++ **Bug fixes and improvements**
+ + Improvement in speed and kernel reliability
+ + Jupyter Widget UI updates
+
+## 2020-10-12
+
+### Azure Machine Learning SDK for Python v1.16.0
++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + AKSWebservice and AKSEndpoints now support pod-level CPU and Memory resource limits. These optional limits can be used by setting `--cpu-cores-limit` and `--memory-gb-limit` flags in applicable CLI calls
+ + **azureml-core**
+ + Pin major versions of direct dependencies of azureml-core
+ + AKSWebservice and AKSEndpoints now support pod-level CPU and Memory resource limits. More information on [Kubernetes Resources and Limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits)
+ + Updated run.log_table to allow individual rows to be logged.
+ + Added static method `Run.get(workspace, run_id)` to retrieve a run only using a workspace
+ + Added instance method `Workspace.get_run(run_id)` to retrieve a run within the workspace
+ + Introducing the command property in run configuration, which enables users to submit a command instead of a script and arguments.
+ + **azureml-interpret**
+ + fixed explanation client is_raw flag behavior in azureml-interpret
+ + **azureml-sdk**
+ + `azureml-sdk` officially supports Python 3.8.
+ + **azureml-train-core**
+ + Adding TensorFlow 2.3 curated environment
+ + Introducing the command property in run configuration, which enables users to submit a command instead of a script and arguments.
+ + **azureml-widgets**
+ + Redesigned interface for script run widget.
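+
+ A minimal sketch of the two new ways to retrieve a run introduced in this release; the run ID is hypothetical and a local workspace config is assumed.
+
+ ```python
+ from azureml.core import Run, Workspace
+
+ ws = Workspace.from_config()
+ run_id = "example_run_id"          # hypothetical run ID
+
+ # New static method: retrieve a run from only a workspace and run ID
+ run = Run.get(ws, run_id)
+
+ # Equivalent new instance method on the workspace
+ same_run = ws.get_run(run_id)
+ ```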
++
+## 2020-09-28
+
+### Azure Machine Learning SDK for Python v1.15.0
++ **Bug fixes and improvements**
+ + **azureml-contrib-interpret**
+ + LIME explainer moved from azureml-contrib-interpret to interpret-community package and image explainer removed from azureml-contrib-interpret package
+ + visualization dashboard removed from azureml-contrib-interpret package, explanation client moved to azureml-interpret package and deprecated in azureml-contrib-interpret package and notebooks updated to reflect improved API
+ + fix pypi package descriptions for azureml-interpret, azureml-explain-model, azureml-contrib-interpret and azureml-tensorboard
+ + **azureml-contrib-notebook**
+ + Pin the nbconvert dependency to < 6 so that papermill 1.x continues to work.
+ + **azureml-core**
+ + Added parameters to the TensorflowConfiguration and MpiConfiguration constructor to enable a more streamlined initialization of the class attributes without requiring the user to set each individual attribute. Added a PyTorchConfiguration class for configuring distributed PyTorch jobs in ScriptRunConfig.
+ + Pin the version of azure-mgmt-resource to fix the authentication error.
+ + Support Triton No Code Deploy
+ + Output directories specified in Run.start_logging() will now be tracked when using the run in interactive scenarios. The tracked files will be visible in ML Studio upon calling Run.complete().
+ + File encoding can now be specified during dataset creation with `Dataset.Tabular.from_delimited_files` and `Dataset.Tabular.from_json_lines_files` by passing the `encoding` argument (see the sketch after this list). The supported encodings are 'utf8', 'iso88591', 'latin1', 'ascii', 'utf16', 'utf32', 'utf8bom' and 'windows1252'.
+ + Bug fix when environment object is not passed to ScriptRunConfig constructor.
+ + Updated Run.cancel() to allow cancel of a local run from another machine.
+ + **azureml-dataprep**
+ + Fixed dataset mount timeout issues.
+ + **azureml-explain-model**
+ + fix pypi package descriptions for azureml-interpret, azureml-explain-model, azureml-contrib-interpret and azureml-tensorboard
+ + **azureml-interpret**
+ + visualization dashboard removed from azureml-contrib-interpret package, explanation client moved to azureml-interpret package and deprecated in azureml-contrib-interpret package and notebooks updated to reflect improved API
+ + azureml-interpret package updated to depend on interpret-community 0.15.0
+ + fix pypi package descriptions for azureml-interpret, azureml-explain-model, azureml-contrib-interpret and azureml-tensorboard
+ + **azureml-pipeline-core**
+ + Fixed a pipeline issue with `OutputFileDatasetConfig` where the system may stop responding when `register_on_complete` is called with the `name` parameter set to a pre-existing dataset name.
+ + **azureml-pipeline-steps**
+ + Removed stale databricks notebooks.
+ + **azureml-tensorboard**
+ + fix pypi package descriptions for azureml-interpret, azureml-explain-model, azureml-contrib-interpret and azureml-tensorboard
+ + **azureml-train-automl-runtime**
+ + visualization dashboard removed from azureml-contrib-interpret package, explanation client moved to azureml-interpret package and deprecated in azureml-contrib-interpret package and notebooks updated to reflect improved API
+ + **azureml-widgets**
+ + visualization dashboard removed from azureml-contrib-interpret package, explanation client moved to azureml-interpret package and deprecated in azureml-contrib-interpret package and notebooks updated to reflect improved API
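+
+ As referenced in the azureml-core notes above, a minimal sketch of specifying a file encoding at dataset creation; the datastore path is hypothetical and the default workspace blob store is assumed.
+
+ ```python
+ from azureml.core import Dataset, Datastore, Workspace
+
+ ws = Workspace.from_config()
+ datastore = Datastore.get(ws, "workspaceblobstore")   # assumes the default blob datastore
+
+ # Specify the file encoding when creating a tabular dataset (path is hypothetical)
+ dataset = Dataset.Tabular.from_delimited_files(
+     path=(datastore, "data/sales.csv"),
+     encoding="latin1"
+ )
+ ```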
+
+## 2020-09-21
+
+### Azure Machine Learning SDK for Python v1.14.0
++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + Grid Profiling removed from the SDK and is no longer supported.
+ + **azureml-accel-models**
+ + azureml-accel-models package now supports TensorFlow 2.x
+ + **azureml-automl-core**
+ + Added error handling in get_output for cases when local versions of pandas/sklearn don't match the ones used during training
+ + **azureml-automl-runtime**
+ + Fixed a bug where AutoArima iterations would fail with a PredictionException and the message: "Silent failure occurred during prediction."
+ + **azureml-cli-common**
+ + Grid Profiling removed from the SDK and is no longer supported.
+ + **azureml-contrib-server**
+ + Update description of the package for pypi overview page.
+ + **azureml-core**
+ + Grid Profiling removed from the SDK and is no longer supported.
+ + Reduce number of error messages when workspace retrieval fails.
+ + Don't show warning when fetching metadata fails
+ + New Kusto Step and Kusto Compute Target.
+ + Update document for sku parameter. Remove sku in workspace update functionality in CLI and SDK.
+ + Update description of the package for pypi overview page.
+ + Updated documentation for AzureML Environments.
+ + Expose service managed resources settings for AML workspace in SDK.
+ + **azureml-dataprep**
+ + Enable execute permission on files for Dataset mount.
+ + **azureml-mlflow**
+ + Updated AzureML MLflow documentation and notebook samples
+ + New support for MLflow projects with AzureML backend
+ + MLflow model registry support
+ + Added Azure RBAC support for AzureML-MLflow operations
+
+ + **azureml-pipeline-core**
+ + Improved the documentation of the PipelineOutputFileDataset.parse_* methods.
+ + New Kusto Step and Kusto Compute Target.
+ + Provided the Swaggerurl property for the pipeline-endpoint entity, through which the user can see the schema definition for the published pipeline endpoint.
+ + **azureml-pipeline-steps**
+ + New Kusto Step and Kusto Compute Target.
+ + **azureml-telemetry**
+ + Update description of the package for pypi overview page.
+ + **azureml-train**
+ + Update description of the package for pypi overview page.
+ + **azureml-train-automl-client**
+ + Added error handling in get_output for cases when local versions of pandas/sklearn don't match the ones used during training
+ + **azureml-train-core**
+ + Update description of the package for pypi overview page.
+
+## 2020-08-31
+
+### Azure Machine Learning SDK for Python v1.13.0
++ **Preview features**
+ + **azureml-core**
+ With the new output datasets capability, you can write back to cloud storage including Blob, ADLS Gen 1, ADLS Gen 2, and FileShare. You can configure where to output data, how to output data (via mount or upload), whether to register the output data for future reuse and sharing, and how to pass intermediate data between pipeline steps seamlessly. This enables reproducibility and sharing, prevents duplication of data, and results in cost efficiency and productivity gains. [Learn how to use it](/python/api/azureml-core/azureml.data.output_dataset_config.outputfiledatasetconfig), and see the sketch below.
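+
+ A minimal sketch of writing and registering step output with `OutputFileDatasetConfig`; the datastore, destination path, and dataset name are hypothetical.
+
+ ```python
+ from azureml.core import Datastore, Workspace
+ from azureml.data import OutputFileDatasetConfig
+
+ ws = Workspace.from_config()
+ datastore = Datastore.get(ws, "workspaceblobstore")   # assumes the default blob datastore
+
+ # Write a step's output to blob storage and register it as a dataset for reuse
+ # (destination path and dataset name are hypothetical)
+ prepared_data = OutputFileDatasetConfig(
+     name="prepared_data",
+     destination=(datastore, "outputs/prepared")
+ ).register_on_complete(name="prepared_data")
+ ```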
+
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Added validated_{platform}_requirements.txt file for pinning all pip dependencies for AutoML.
+ + This release supports models greater than 4 Gb.
+ + Upgraded AutoML dependencies: `scikit-learn` (now 0.22.1), `pandas` (now 0.25.1), `numpy` (now 1.18.2).
+ + **azureml-automl-runtime**
+ + Set horovod for text DNN to always use fp16 compression.
+ + This release supports models greater than 4 Gb.
+ + Fixed issue where AutoML fails with ImportError: cannot import name `RollingOriginValidator`.
+ + Upgraded AutoML dependencies: `scikit-learn` (now 0.22.1), `pandas` (now 0.25.1), `numpy` (now 1.18.2).
+ + **azureml-contrib-automl-dnn-forecasting**
+ + Upgraded AutoML dependencies: `scikit-learn` (now 0.22.1), `pandas` (now 0.25.1), `numpy` (now 1.18.2).
+ + **azureml-contrib-fairness**
+ + Provide a short description for azureml-contrib-fairness.
+ + **azureml-contrib-pipeline-steps**
+ + Added message indicating this package is deprecated and user should use azureml-pipeline-steps instead.
+ + **azureml-core**
+ + Added list key command for workspace.
+ + Add tags parameter in Workspace SDK and CLI.
+ + Fixed the bug where submitting a child run with Dataset will fail due to `TypeError: can't pickle _thread.RLock objects`.
+ + Adding page_count default/documentation for Model list().
+ + Modified the CLI & SDK to take the adbworkspace parameter and added the workspace ADB link/unlink runner.
+ + Fixed a bug in Dataset.update that caused the newest Dataset version to be updated instead of the version the update was called on.
+ + Fix bug in Dataset.get_by_name that would show the tags for the newest Dataset version even when a specific older version was retrieved.
+ + **azureml-interpret**
+ + Added probability outputs to shap scoring explainers in azureml-interpret based on shap_values_output parameter from original explainer.
+ + **azureml-pipeline-core**
+ + Improved `PipelineOutputAbstractDataset.register`'s documentation.
+ + **azureml-train-automl-client**
+ + Upgraded AutoML dependencies: `scikit-learn` (now 0.22.1), `pandas` (now 0.25.1), `numpy` (now 1.18.2).
+ + **azureml-train-automl-runtime**
+ + Upgraded AutoML dependencies: `scikit-learn` (now 0.22.1), `pandas` (now 0.25.1), `numpy` (now 1.18.2).
+ + **azureml-train-core**
+ + Users must now provide a valid hyperparameter_sampling arg when creating a HyperDriveConfig. In addition, the documentation for HyperDriveRunConfig has been edited to inform users of the deprecation of HyperDriveRunConfig.
+ + Reverting PyTorch Default Version to 1.4.
+ + Adding PyTorch 1.6 & TensorFlow 2.2 images and curated environment.
+
+### Azure Machine Learning studio Notebooks Experience (August Update)
++ **New features**
+ + New Getting started landing Page
+
++ **Preview features**
+ + Gather feature in Notebooks. With the [Gather](../how-to-run-jupyter-notebooks.md#clean-your-notebook-preview) feature, users can now easily clean up notebooks. Gather uses an automated dependency analysis of your notebook, ensuring the essential code is kept but removing any irrelevant pieces.
++ **Bug fixes and improvements**
+ + Improvement in speed and reliability
+ + Dark mode bugs fixed
+ + Output Scroll Bugs fixed
+ + Sample Search now searches all the content of all the files in the Azure Machine Learning sample notebooks repo
+ + Multi-line R cells can now run
+ + "I trust contents of this file" is now auto checked after first time
+ + Improved Conflict resolution dialog, with new "Make a copy" option
+
+## 2020-08-17
+
+### Azure Machine Learning SDK for Python v1.12.0
++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + Add image_name and image_label parameters to Model.package() to enable renaming the built package image.
+ + **azureml-automl-core**
+ + AutoML raises a new error code from dataprep when content is modified while being read.
+ + **azureml-automl-runtime**
+ + Added alerts for the user when data contains missing values but featurization is turned off.
+ + Fixed child run failures when data contains nan and featurization is turned off.
+ + AutoML raises a new error code from dataprep when content is modified while being read.
+ + Updated normalization for forecasting metrics to occur by grain.
+ + Improved calculation of forecast quantiles when lookback features are disabled.
+ + Fixed bool sparse matrix handling when computing explanations after AutoML.
+ + **azureml-core**
+ + A new method `run.get_detailed_status()` now shows the detailed explanation of current run status. It is currently only showing explanation for `Queued` status.
+ + Add image_name and image_label parameters to Model.package() to enable renaming the built package image.
+ + New method `set_pip_requirements()` to set the entire pip section in [`CondaDependencies`](/python/api/azureml-core/azureml.core.conda_dependencies.condadependencies) at once (see the sketch after this list).
+ + Enable registering credential-less ADLS Gen2 datastore.
+ + Improved error message when trying to download or mount an incorrect dataset type.
+ + Update time series dataset filter sample notebook with more examples of partition_timestamp that provides filter optimization.
+ + Change the sdk and CLI to accept subscriptionId, resourceGroup, workspaceName, peConnectionName as parameters instead of ArmResourceId when deleting private endpoint connection.
+ + Experimental Decorator shows class name for easier identification.
+ + Descriptions for the Assets inside of Models are no longer automatically generated based on a Run.
+ + **azureml-datadrift**
+ + Mark create_from_model API in DataDriftDetector as to be deprecated.
+ + **azureml-dataprep**
+ + Improved error message when trying to download or mount an incorrect dataset type.
+ + **azureml-pipeline-core**
+ + Fixed bug when deserializing pipeline graph that contains registered datasets.
+ + **azureml-pipeline-steps**
+ + RScriptStep supports RSection from azureml.core.environment.
+ + Removed the passthru_automl_config parameter from the `AutoMLStep` public API and converted it to an internal only parameter.
+ + **azureml-train-automl-client**
+ + Removed local asynchronous, managed environment runs from AutoML. All local runs will run in the environment the run was launched from.
+ + Fixed snapshot issues when submitting AutoML runs with no user-provided scripts.
+ + Fixed child run failures when data contains nan and featurization is turned off.
+ + **azureml-train-automl-runtime**
+ + AutoML raises a new error code from dataprep when content is modified while being read.
+ + Fixed snapshot issues when submitting AutoML runs with no user-provided scripts.
+ + Fixed child run failures when data contains nan and featurization is turned off.
+ + **azureml-train-core**
+ + Added support for specifying pip options (for example --extra-index-url) in the pip requirements file passed to an [`Estimator`](/python/api/azureml-train-core/azureml.train.estimator.estimator) through `pip_requirements_file` parameter.
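+
+ As referenced above, a minimal sketch of replacing the pip section with `set_pip_requirements()`; the environment name and package pins are hypothetical.
+
+ ```python
+ from azureml.core import Environment
+ from azureml.core.conda_dependencies import CondaDependencies
+
+ conda_deps = CondaDependencies()
+ # Replace the entire pip section in one call (package pins are hypothetical)
+ conda_deps.set_pip_requirements([
+     "azureml-defaults",
+     "scikit-learn==0.22.1",
+     "pandas>=1.0,<2.0"
+ ])
+
+ env = Environment(name="example-env")
+ env.python.conda_dependencies = conda_deps
+ ```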
++
+## 2020-08-03
+
+### Azure Machine Learning SDK for Python v1.11.0
++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + Fix model framework and model framework not passed in run object in CLI model registration path
+ + Fix CLI amlcompute identity show command to show tenant ID and principal ID
+ + **azureml-train-automl-client**
+ + Added get_best_child() to AutoMLRun for fetching the best child run for an AutoML Run without downloading the associated model.
+ + Added ModelProxy object that allow predict or forecast to be run on a remote training environment without downloading the model locally.
+ + Unhandled exceptions in AutoML now point to a known issues HTTP page, where more information about the errors can be found.
+ + **azureml-core**
+ + Model names can be 255 characters long.
+ + Environment.get_image_details() return object type changed. `DockerImageDetails` class replaced `dict`, image details are available from the new class properties. Changes are backward compatible.
+ + Fix bug for Environment.from_pip_requirements() to preserve dependencies structure
+ + Fixed a bug where log_list would fail if an int and double were included in the same list.
+ + While enabling private link on an existing workspace, please note that if there are compute targets associated with the workspace, those targets will not work if they are not behind the same virtual network as the workspace private endpoint.
+ + Made `as_named_input` optional when using datasets in experiments and added `as_mount` and `as_download` to `FileDataset`. The input name will be automatically generated if `as_mount` or `as_download` is called.
+ + **azureml-automl-core**
+ + Unhandled exceptions in AutoML now point to a known issues HTTP page, where more information about the errors can be found.
+ + Added get_best_child() to AutoMLRun for fetching the best child run for an AutoML Run without downloading the associated model.
+ + Added ModelProxy object that allows predict or forecast to be run on a remote training environment without downloading the model locally.
+ + **azureml-pipeline-steps**
+ + Added `enable_default_model_output` and `enable_default_metrics_output` flags to `AutoMLStep`. These flags can be used to enable/disable the default outputs.
++
+## 2020-07-20
+
+### Azure Machine Learning SDK for Python v1.10.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + When using AutoML, if a path is passed into the AutoMLConfig object and it does not already exist, it will be automatically created.
+ + Users can now specify a time series frequency for forecasting tasks by using the `freq` parameter.
+ + **azureml-automl-runtime**
+ + When using AutoML, if a path is passed into the AutoMLConfig object and it does not already exist, it will be automatically created.
+ + Users can now specify a time series frequency for forecasting tasks by using the `freq` parameter.
+ + AutoML Forecasting now supports rolling evaluation, which applies to the use case that the length of a test or validation set is longer than the input horizon, and known y_pred value is used as forecasting context.
+ + **azureml-core**
+ + Warning messages will be printed if no files were downloaded from the datastore in a run.
+ + Added documentation for `skip_validation` to the `Datastore.register_azure_sql_database method`.
+ + Users are required to upgrade to sdk v1.10.0 or above to create an auto approved private endpoint. This includes the Notebook resource that is usable behind the VNet.
+ + Expose NotebookInfo in the response of get workspace.
+ + Changes so that calls to list compute targets and get a compute target succeed on a remote run. SDK functions to get a compute target and list workspace compute targets will now work in remote runs.
+ + Add deprecation messages to the class descriptions for azureml.core.image classes.
+ + Throw exception and clean up workspace and dependent resources if workspace private endpoint creation fails.
+ + Support workspace sku upgrade in workspace update method.
+ + **azureml-datadrift**
+ + Update matplotlib version from 3.0.2 to 3.2.1 to support Python 3.8.
+ + **azureml-dataprep**
+ + Added support of web url data sources with `Range` or `Head` request.
+ + Improved stability for file dataset mount and download.
+ + **azureml-train-automl-client**
+ + Fixed issues related to removal of `RequirementParseError` from setuptools.
+ + Use docker instead of conda for local runs submitted using "compute_target='local'"
+ + The iteration duration printed to the console has been corrected. Previously, the iteration duration was sometimes printed as run end time minus run creation time. It has been corrected to equal run end time minus run start time.
+ + When using AutoML, if a path is passed into the AutoMLConfig object and it does not already exist, it will be automatically created.
+ + Users can now specify a time series frequency for forecasting tasks by using the `freq` parameter.
+ + **azureml-train-automl-runtime**
+ + Improved console output when best model explanations fail.
+ + Renamed input parameter to "blocked_models" to remove a sensitive term.
+ + Renamed input parameter to "allowed_models" to remove a sensitive term.
+ + Users can now specify a time series frequency for forecasting tasks by using the `freq` parameter.
+
+
+## 2020-07-06
+
+### Azure Machine Learning SDK for Python v1.9.0
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Replaced get_model_path() with the AZUREML_MODEL_DIR environment variable in the AutoML autogenerated scoring script (see the sketch at the end of this release's list). Also added telemetry to track failures during init().
+ + Removed the ability to specify `enable_cache` as part of AutoMLConfig
+ + Fixed a bug where runs may fail with service errors during specific forecasting runs
+ + Improved error handling around specific models during `get_output`
+ + Fixed call to fitted_model.fit(X, y) for classification with y transformer
+ + Enabled customized forward fill imputer for forecasting tasks
+ + A new ForecastingParameters class will be used instead of forecasting parameters in a dict format
+ + Improved target lag autodetection
+ + Added limited availability of multi-noded, multi-gpu distributed featurization with BERT
+ + **azureml-automl-runtime**
+ + Prophet now does additive seasonality modeling instead of multiplicative.
+ + Fixed the issue where short grains with frequencies different from those of the long grains resulted in failed runs.
+ + **azureml-contrib-automl-dnn-vision**
+ + Collect system/gpu stats and log averages for training and scoring
+ + **azureml-contrib-mir**
+ + Added support for enable-app-insights flag in ManagedInferencing
+ + **azureml-core**
+ + Added a validate parameter to the following APIs, allowing validation to be skipped when the data source is not accessible from the current compute:
+ + TabularDataset.time_before(end_time, include_boundary=True, validate=True)
+ + TabularDataset.time_after(start_time, include_boundary=True, validate=True)
+ + TabularDataset.time_recent(time_delta, include_boundary=True, validate=True)
+ + TabularDataset.time_between(start_time, end_time, include_boundary=True, validate=True)
+ + Added framework filtering support for model list, and added NCD AutoML sample in notebook back
+ + For Datastore.register_azure_blob_container and Datastore.register_azure_file_share (only options that support SAS token), we have updated the doc strings for the `sas_token` field to include minimum permissions requirements for typical read and write scenarios.
+ + Deprecating _with_auth param in ws.get_mlflow_tracking_uri()
+ + **azureml-mlflow**
+ + Add support for deploying local file:// models with AzureML-MLflow
+ + Deprecating _with_auth param in ws.get_mlflow_tracking_uri()
+ + **azureml-opendatasets**
+ + Recently published Covid-19 tracking datasets are now available with the SDK
+ + **azureml-pipeline-core**
+ + Log out warning when "azureml-defaults" is not included as part of pip-dependency
+ + Improve Note rendering.
+ + Added support for quoted line breaks when parsing delimited files to PipelineOutputFileDataset.
+ + The PipelineDataset class is deprecated. For more information, see https://aka.ms/dataset-deprecation. Learn how to use dataset with pipeline, see https://aka.ms/pipeline-with-dataset.
+ + **azureml-pipeline-steps**
+ + Doc updates to azureml-pipeline-steps.
+ + Added support in ParallelRunConfig's `load_yaml()` for users to define Environments inline with the rest of the config or in a separate file
+ + **azureml-train-automl-client**.
+ + Removed the ability to specify `enable_cache` as part of AutoMLConfig
+ + **azureml-train-automl-runtime**
+ + Added limited availability of multi-noded, multi-gpu distributed featurization with BERT.
+ + Added error handling for incompatible packages in ADB based automated machine learning runs.
+ + **azureml-widgets**
+ + Doc updates to azureml-widgets.
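+
+ As referenced in the azureml-automl-core notes above, a minimal sketch of a scoring script `init()` that locates the model through the AZUREML_MODEL_DIR environment variable; the model file name is hypothetical.
+
+ ```python
+ import os
+ import joblib
+
+ def init():
+     global model
+     # The autogenerated scoring script now locates the model through the
+     # AZUREML_MODEL_DIR environment variable instead of get_model_path();
+     # the model file name is hypothetical.
+     model_dir = os.getenv("AZUREML_MODEL_DIR")
+     model = joblib.load(os.path.join(model_dir, "model.pkl"))
+ ```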
+
+
+## 2020-06-22
+
+### Azure Machine Learning SDK for Python v1.8.0
+
+ + **Preview features**
+ + **azureml-contrib-fairness**
+ The `azureml-contrib-fairness` package provides integration between the open-source fairness assessment and unfairness mitigation package [Fairlearn](https://fairlearn.github.io) and Azure Machine Learning studio. In particular, the package enables model fairness evaluation dashboards to be uploaded as part of an AzureML Run and appear in Azure Machine Learning studio
++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + Support getting logs of init container.
+ + Added new CLI commands to manage ComputeInstance
+ + **azureml-automl-core**
+ + Users are now able to enable stack ensemble iteration for Time series tasks with a warning that it could potentially overfit.
+ + Added a new type of user exception that is raised if the cache store contents have been tampered with
+ + **azureml-automl-runtime**
+ + Class Balancing Sweeping will no longer be enabled if user disables featurization.
+ + **azureml-contrib-notebook**
+ + Doc improvements to azureml-contrib-notebook package.
+ + **azureml-contrib-pipeline-steps**
+ + Doc improvements to azureml-contrib-pipeline-steps package.
+ + **azureml-core**
+ + Add set_connection, get_connection, list_connections, delete_connection functions for customer to operate on workspace connection resource
+ + Documentation updates to the azureml-core/azureml.exceptions package.
+ + Documentation updates to azureml-core package.
+ + Doc updates to ComputeInstance class.
+ + Doc improvements to azureml-core/azureml.core.compute package.
+ + Doc improvements for webservice-related classes in azureml-core.
+ + Support user-selected datastore to store profiling data
+ + Added expand and page_count property for model list API
+ + Fixed bug where removing the overwrite property will cause the submitted run to fail with deserialization error.
+ + Fixed inconsistent folder structure when downloading or mounting a FileDataset referencing to a single file.
+ + Loading a dataset of parquet files to_spark_dataframe is now faster and supports all parquet and Spark SQL datatypes.
+ + Support getting logs of init container.
+ + AutoML runs are now marked as child run of Parallel Run Step.
+ + **azureml-datadrift**
+ + Doc improvements to azureml-contrib-notebook package.
+ + **azureml-dataprep**
+ + Loading a dataset of parquet files to_spark_dataframe is now faster and supports all parquet and Spark SQL datatypes.
+ + Better memory handling for OutOfMemory issue for to_pandas_dataframe.
+ + **azureml-interpret**
+ + Upgraded azureml-interpret to use interpret-community version 0.12.*
+ + **azureml-mlflow**
+ + Doc improvements to azureml-mlflow.
+ + Adds support for AML model registry with MLFlow.
+ + **azureml-opendatasets**
+ + Added support for Python 3.8
+ + **azureml-pipeline-core**
+ + Updated `PipelineDataset`'s documentation to make it clear it is an internal class.
+ + ParallelRunStep updates to accept multiple values for one argument, for example: "--group_column_names", "Col1", "Col2", "Col3"
+ + Removed the passthru_automl_config requirement for intermediate data usage with AutoMLStep in Pipelines.
+ + **azureml-pipeline-steps**
+ + Doc improvements to azureml-pipeline-steps package.
+ + Removed the passthru_automl_config requirement for intermediate data usage with AutoMLStep in Pipelines.
+ + **azureml-telemetry**
+ + Doc improvements to azureml-telemetry.
+ + **azureml-train-automl-client**
+ + Fixed a bug where `experiment.submit()` called twice on an `AutoMLConfig` object resulted in different behavior.
+ + Users are now able to enable stack ensemble iteration for Time series tasks with a warning that it could potentially overfit.
+ + Changed AutoML run behavior to raise UserErrorException if service throws user error
+ + Fixes a bug that caused azureml_automl.log to not get generated or be missing logs when performing an AutoML experiment on a remote compute target.
+ + For Classification data sets with imbalanced classes, we will apply Weight Balancing, if the feature sweeper determines that for subsampled data, Weight Balancing improves the performance of the classification task by a certain threshold.
+ + AutoML runs are now marked as child run of Parallel Run Step.
+ + **azureml-train-automl-runtime**
+ + Changed AutoML run behavior to raise UserErrorException if service throws user error
+ + AutoML runs are now marked as child run of Parallel Run Step.
+
+
+## 2020-06-08
+
+### Azure Machine Learning SDK for Python v1.7.0
++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + Completed the removal of model profiling from mir contrib by cleaning up CLI commands and package dependencies. Model profiling is available in core.
+ + Upgrades the min Azure CLI version to 2.3.0
+ + **azureml-automl-core**
+ + Better exception message on featurization step fit_transform() due to custom transformer parameters.
+ + Add support for multiple languages for deep learning transformer models such as BERT in automated ML.
+ + Remove deprecated lag_length parameter from documentation.
+ + The forecasting parameters documentation was improved. The lag_length parameter was deprecated.
+ + **azureml-automl-runtime**
+ + Fixed the error raised when one of the categorical columns is empty at forecast/test time.
+ + Fixed run failures that happened when lookback features are enabled and the data contains short grains.
+ + Fixed the issue with duplicated time index error message when lags or rolling windows were set to 'auto'.
+ + Fixed the issue with Prophet and Arima models on data sets, containing the lookback features.
+ + Added support of dates before 1677-09-21 or after 2262-04-11 in columns other than date time in the forecasting tasks. Improved error messages.
+ + The forecasting parameters documentation was improved. The lag_length parameter was deprecated.
+ + Better exception message on featurization step fit_transform() due to custom transformer parameters.
+ + Add support for multiple languages for deep learning transformer models such as BERT in automated ML.
+ + Cache operations that result in some OSErrors will raise user error.
+ + Added checks to ensure training and validation data have the same number and set of columns
+ + Fixed issue with the autogenerated AutoML scoring script when the data contains quotation marks
+ + Enabling explanations for AutoML Prophet and ensembled models that contain Prophet model.
+ + A recent customer issue revealed a live-site bug wherein we log messages along Class-Balancing-Sweeping even when the Class Balancing logic isn't properly enabled. Removing those logs/messages with this PR.
+ + **azureml-cli-common**
+ + Completed the removal of model profiling from mir contrib by cleaning up CLI commands and package dependencies. Model profiling is available in core.
+ + **azureml-contrib-reinforcementlearning**
+ + Load testing tool
+ + **azureml-core**
+ + Documentation changes on Script_run_config.py
+ + Fixes a bug with printing the output of run submit-pipeline CLI
+ + Documentation improvements to azureml-core/azureml.data
+ + Fixes issue retrieving storage account using hdfs getconf command
+ + Improved register_azure_blob_container and register_azure_file_share documentation
+ + **azureml-datadrift**
+ + Improved implementation for disabling and enabling dataset drift monitors
+ + **azureml-interpret**
+ + In explanation client, remove NaNs or Infs prior to json serialization on upload from artifacts
+ + Update to latest version of interpret-community to improve out of memory errors for global explanations with many features and classes
+ + Add true_ys optional parameter to explanation upload to enable additional features in the studio UI
+ + Improve download_model_explanations() and list_model_explanations() performance
+ + Small tweaks to notebooks, to aid with debugging
+ + **azureml-opendatasets**
+ + azureml-opendatasets needs azureml-dataprep version 1.4.0 or higher. Added warning if lower version is detected
+ + **azureml-pipeline-core**
+ + This change allows user to provide an optional runconfig to the moduleVersion when calling module.Publish_python_script.
+ + Enabled node count to be a pipeline parameter in ParallelRunStep in azureml.pipeline.steps.
+ + **azureml-pipeline-steps**
+ + This change allows user to provide an optional runconfig to the moduleVersion when calling module.Publish_python_script.
+ + **azureml-train-automl-client**
+ + Add support for multiple languages for deep learning transformer models such as BERT in automated ML.
+ + Remove deprecated lag_length parameter from documentation.
+ + The forecasting parameters documentation was improved. The lag_length parameter was deprecated.
+ + **azureml-train-automl-runtime**
+ + Enabling explanations for AutoML Prophet and ensembled models that contain Prophet model.
+ + Documentation updates to azureml-train-automl-* packages.
+ + **azureml-train-core**
+ + Supporting TensorFlow version 2.1 in the PyTorch Estimator
+ + Improvements to azureml-train-core package.
+
+## 2020-05-26
+
+### Azure Machine Learning SDK for Python v1.6.0
++ **New features**
+ + **azureml-automl-runtime**
+ + AutoML Forecasting now supports forecasting beyond the pre-specified max horizon without retraining the model. When the forecast destination is farther into the future than the specified maximum horizon, the forecast() function will still make point predictions out to the later date using a recursive operation mode. For an illustration of the new feature, see the "Forecasting farther than the maximum horizon" section of the "forecasting-forecast-function" notebook in this [folder](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning).
+
+ + **azureml-pipeline-steps**
+ + ParallelRunStep is now released and is part of **azureml-pipeline-steps** package. Existing ParallelRunStep in **azureml-contrib-pipeline-steps** package is deprecated. Changes from public preview version:
+ + Added `run_max_try` optional configurable parameter to control max call to run method for any given batch, default value is 3.
+ + No PipelineParameters are autogenerated anymore. The following configurable values can be set as PipelineParameter explicitly:
+ + mini_batch_size
+ + node_count
+ + process_count_per_node
+ + logging_level
+ + run_invocation_timeout
+ + run_max_try
+ + The default value for process_count_per_node is changed to 1. The user should tune this value for better performance. The best practice is to set it to the number of GPUs or CPUs the node has.
+ + ParallelRunStep does not inject any packages; the user needs to include the **azureml-core** and **azureml-dataprep[pandas, fuse]** packages in the environment definition. If a custom Docker image is used with user_managed_dependencies, then the user needs to install conda on the image.
+
++ **Breaking changes**
+ + **azureml-pipeline-steps**
+ + Deprecated the use of azureml.dprep.Dataflow as a valid type of input for AutoMLConfig
+ + **azureml-train-automl-client**
+ + Deprecated the use of azureml.dprep.Dataflow as a valid type of input for AutoMLConfig
++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Fixed the bug where a warning may be printed during `get_output` that asked user to downgrade client.
+ + Updated Mac to rely on cudatoolkit=9.0 as it is not available at version 10 yet.
+ + Removing restrictions on prophet and xgboost models when trained on remote compute.
+ + Improved logging in AutoML
+ + The error handling for custom featurization in forecasting tasks was improved.
+ + Added functionality to allow users to include lagged features to generate forecasts.
+ + Updates to error message to correctly display user error.
+ + Support for cv_split_column_names to be used with training_data
+ + Update logging the exception message and traceback.
+ + **azureml-automl-runtime**
+ + Enable guardrails for forecasting missing value imputations.
+ + Improved logging in AutoML
+ + Added fine grained error handling for data prep exceptions
+ + Removing restrictions on prophet and xgboost models when trained on remote compute.
+ + `azureml-train-automl-runtime` and `azureml-automl-runtime` have updated dependencies for `pytorch`, `scipy`, and `cudatoolkit`. We now support `pytorch==1.4.0`, `scipy>=1.0.0,<=1.3.1`, and `cudatoolkit==10.1.243`.
+ + The error handling for custom featurization in forecasting tasks was improved.
+ + The forecasting data set frequency detection mechanism was improved.
+ + Fixed issue with Prophet model training on some data sets.
+ + The auto detection of max horizon during the forecasting was improved.
+ + Added functionality to allow users to include lagged features to generate forecasts.
+ + Adds functionality in the forecast function to enable providing forecasts beyond the trained horizon without retraining the forecasting model.
+ + Support for cv_split_column_names to be used with training_data
+ + **azureml-contrib-automl-dnn-forecasting**
+ + Improved logging in AutoML
+ + **azureml-contrib-mir**
+ + Added support for Windows services in ManagedInferencing
+ + Remove old MIR workflows such as attach MIR compute, SingleModelMirWebservice class - Clean out model profiling placed in contrib-mir package
+ + **azureml-contrib-pipeline-steps**
+ + Minor fix for YAML support
+ + ParallelRunStep is released to General Availability - azureml.contrib.pipeline.steps has a deprecation notice and is moved to azureml.pipeline.steps
+ + **azureml-contrib-reinforcementlearning**
+ + RL Load testing tool
+ + RL estimator has smart defaults
+ + **azureml-core**
+ + Remove old MIR workflows such as attach MIR compute, SingleModelMirWebservice class - Clean out model profiling placed in contrib-mir package
+ + Fixed the information provided to the user in case of profiling failure: included request ID and reworded the message to be more meaningful. Added new profiling workflow to profiling runners
+ + Improved error text in case of Dataset execution failures.
+ + Workspace private link CLI support added.
+ + Added an optional parameter `invalid_lines` to `Dataset.Tabular.from_json_lines_files` that allows for specifying how to handle lines that contain invalid JSON. A hedged usage sketch appears at the end of this release's notes.
+ + We will be deprecating the run-based creation of compute in the next release. We recommend creating an actual Amlcompute cluster as a persistent compute target, and using the cluster name as the compute target in your run configuration. See example notebook here: aka.ms/amlcomputenb
+ + Improved error messages in case of Dataset execution failures.
+ + **azureml-dataprep**
+ + Made warning to upgrade pyarrow version more explicit.
+ + Improved error handling and message returned in case of failure to execute dataflow.
+ + **azureml-interpret**
+ + Documentation updates to azureml-interpret package.
+ + Fixed interpretability packages and notebooks to be compatible with latest sklearn update
+ + **azureml-opendatasets**
+ + return None when there is no data returned.
+ + Improve the performance of to_pandas_dataframe.
+ + **azureml-pipeline-core**
+ + Quick fix for ParallelRunStep where loading from YAML was broken
+ + ParallelRunStep is released to General Availability - azureml.contrib.pipeline.steps has a deprecation notice and is moved to azureml.pipeline.steps - new features include: 1. Datasets as PipelineParameter 2. New parameter run_max_retry 3. Configurable append_row output file name
+ + **azureml-pipeline-steps**
+ + Deprecated azureml.dprep.Dataflow as a valid type for input data.
+ + Quick fix for ParallelRunStep where loading from YAML was broken
+ + ParallelRunStep is released to General Availability - azureml.contrib.pipeline.steps has a deprecation notice and is moved to azureml.pipeline.steps - new features include:
+ + Datasets as PipelineParameter
+ + New parameter run_max_retry
+ + Configurable append_row output file name
+ + **azureml-telemetry**
+ + Update logging the exception message and traceback.
+ + **azureml-train-automl-client**
+ + Improved logging in AutoML
+ + Updates to error message to correctly display user error.
+ + Support for cv_split_column_names to be used with training_data
+ + Deprecated azureml.dprep.Dataflow as a valid type for input data.
+ + Updated Mac to rely on cudatoolkit=9.0 as it is not available at version 10 yet.
+ + Removing restrictions on prophet and xgboost models when trained on remote compute.
+ + `azureml-train-automl-runtime` and `azureml-automl-runtime` have updated dependencies for `pytorch`, `scipy`, and `cudatoolkit`. We now support `pytorch==1.4.0`, `scipy>=1.0.0,<=1.3.1`, and `cudatoolkit==10.1.243`.
+ + Added functionality to allow users to include lagged features to generate forecasts.
+ + **azureml-train-automl-runtime**
+ + Improved logging in AutoML
+ + Added fine grained error handling for data prep exceptions
+ + Removing restrictions on prophet and xgboost models when trained on remote compute.
+ + `azureml-train-automl-runtime` and `azureml-automl-runtime` have updated dependencies for `pytorch`, `scipy`, and `cudatoolkit`. We now support `pytorch==1.4.0`, `scipy>=1.0.0,<=1.3.1`, and `cudatoolkit==10.1.243`.
+ + Updates to error message to correctly display user error.
+ + Support for cv_split_column_names to be used with training_data
+ + **azureml-train-core**
+ + Added a new set of HyperDrive specific exceptions. azureml.train.hyperdrive will now throw detailed exceptions.
+ + **azureml-widgets**
+ + Fixed an issue where AzureML Widgets were not displaying in JupyterLab
+
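+As a small illustration of the `invalid_lines` parameter noted under **azureml-core** above, the following hedged sketch assumes a hypothetical `logs/events.jsonl` file on the workspace's default datastore and assumes `'drop'` as one of the supported option values (see the SDK reference for the full list).
+
+```Python
+from azureml.core import Workspace, Dataset
+
+ws = Workspace.from_config()
+datastore = ws.get_default_datastore()
+
+# Skip lines that are not valid JSON instead of failing the whole load.
+dataset = Dataset.Tabular.from_json_lines_files(
+    path=(datastore, "logs/events.jsonl"),   # hypothetical path
+    invalid_lines="drop")                    # assumed option value
+
+df = dataset.to_pandas_dataframe()
+```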
+
+## 2020-05-11
+
+### Azure Machine Learning SDK for Python v1.5.0
+++ **New features**
+ + **Preview features**
+ + **azureml-contrib-reinforcementlearning**
+ + Azure Machine Learning is releasing preview support for reinforcement learning using the [Ray](https://ray.io) framework. The `ReinforcementLearningEstimator` enables training of reinforcement learning agents across GPU and CPU compute targets in Azure Machine Learning.
+++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + Removed a debugging warning log that was accidentally left behind in a previous change.
+ + Bug fix: inform clients about partial failure during profiling
+ + **azureml-automl-core**
+ + Speed up the Prophet/AutoArima model in AutoML forecasting by enabling parallel fitting for the time series when data sets have multiple time series. To benefit from this new feature, it is recommended that you set "max_cores_per_iteration = -1" (that is, use all the available CPU cores) in AutoMLConfig.
+ + Fix KeyError on printing guardrails in console interface
+ + Fixed error message for experimentation_timeout_hours
+ + Deprecated TensorFlow models for AutoML.
+ + **azureml-automl-runtime**
+ + Fixed error message for experimentation_timeout_hours
+ + Fixed unclassified exception when trying to deserialize from cache store
+ + Speed up Prophet/AutoArima model in AutoML forecasting by enabling parallel fitting for the time series when data sets have multiple time series.
+ + Fixed the forecasting with enabled rolling window on the data sets where test/prediction set does not contain one of grains from the training set.
+ + Improved handling of missing data
+ + Fixed an issue with prediction intervals during forecasting on data sets containing time series that are not aligned in time.
+ + Added better validation of data shape for the forecasting tasks.
+ + Improved the frequency detection.
+ + Created better error message if the cross validation folds for forecasting tasks cannot be generated.
+ + Fix console interface to print missing value guardrail correctly.
+ + Enforcing datatype checks on cv_split_indices input in AutoMLConfig.
+ + **azureml-cli-common**
+ + Bug fix: inform clients about partial failure during profiling
+ + **azureml-contrib-mir**
+ + Adds a class azureml.contrib.mir.RevisionStatus which relays information about the currently deployed MIR revision and the most recent version specified by the user. This class is included in the MirWebservice object under 'deployment_status' attribute.
+ + Enables update on Webservices of type MirWebservice and its child class SingleModelMirWebservice.
+ + **azureml-contrib-reinforcementlearning**
+ + Added support for Ray 0.8.3
+ + AmlWindowsCompute only supports Azure Files as mounted storage
+ + Renamed health_check_timeout to health_check_timeout_seconds
+ + Fixed some class/method descriptions.
+ + **azureml-core**
+ + Enabled WASB -> Blob conversions in Azure Government and China clouds.
+ + Fixes bug to allow Reader roles to use az ml run CLI commands to get run information
+ + Removed unnecessary logging during Azure ML Remote Runs with input Datasets.
+ + RCranPackage now supports "version" parameter for the CRAN package version.
+ + Bug fix: inform clients about partial failure during profiling
+ + Added European-style float handling for azureml-core.
+ + Enabled workspace private link features in Azure ml sdk.
+ + When creating a TabularDataset using `from_delimited_files`, you can specify whether empty values should be loaded as None or as empty string by setting the boolean argument `empty_as_string`. A hedged usage sketch appears at the end of this release's notes.
+ + Added European-style float handling for datasets.
+ + Improved error messages on dataset mount failures.
+ + **azureml-datadrift**
+ + Data Drift results query from the SDK had a bug that didn't differentiate the minimum, maximum, and mean feature metrics, resulting in duplicate values. We have fixed this bug by prefixing target or baseline to the metric names. Before: duplicate min, max, mean. After: target_min, target_max, target_mean, baseline_min, baseline_max, baseline_mean.
+ + **azureml-dataprep**
+ + Improve handling of write restricted Python environments when ensuring .NET Dependencies required for data delivery.
+ + Fixed Dataflow creation on file with leading empty records.
+ + Added error handling options for `to_partition_iterator` similar to `to_pandas_dataframe`.
+ + **azureml-interpret**
+ + Reduced explanation path length limits to reduce likelihood of going over Windows limit
+ + Bugfix for sparse explanations created with the mimic explainer using a linear surrogate model.
+ + **azureml-opendatasets**
+ + Fixed an issue where MNIST's columns were parsed as string when they should be int.
+ + **azureml-pipeline-core**
+ + Allowing the option to regenerate_outputs when using a module that is embedded in a ModuleStep.
+ + **azureml-train-automl-client**
+ + Deprecated TensorFlow models for AutoML.
+ + Fix users allow listing unsupported algorithms in local mode
+ + Doc fixes to AutoMLConfig.
+ + Enforcing datatype checks on cv_split_indices input in AutoMLConfig.
+ + Fixed issue with AutoML runs failing in show_output
+ + **azureml-train-automl-runtime**
+ + Fixing a bug in Ensemble iterations that was preventing model download timeout from kicking in successfully.
+ + **azureml-train-core**
+ + Fix typo in azureml.train.dnn.Nccl class.
+ + Supporting PyTorch version 1.5 in the PyTorch Estimator
+ + Fix the issue that framework image can't be fetched in Azure Government region when using training framework estimators
+
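+A hedged sketch of the new `empty_as_string` option for `Dataset.Tabular.from_delimited_files` noted under **azureml-core** above, assuming a hypothetical `data/sales.csv` file on the workspace's default datastore.
+
+```Python
+from azureml.core import Workspace, Dataset
+
+ws = Workspace.from_config()
+datastore = ws.get_default_datastore()
+
+# Load empty field values as empty strings instead of None.
+dataset = Dataset.Tabular.from_delimited_files(
+    path=(datastore, "data/sales.csv"),   # hypothetical path
+    empty_as_string=True)
+
+df = dataset.to_pandas_dataframe()
+```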
+
+## 2020-05-04
+**New Notebook Experience**
+
+You can now create, edit, and share machine learning notebooks and files directly inside the studio web experience of Azure Machine Learning. You can use all the classes and methods available in [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro) from inside these notebooks.
+To get started, visit the [Run Jupyter Notebooks in your workspace](../how-to-run-jupyter-notebooks.md) article.
+
+**New Features Introduced:**
++ Improved editor (Monaco editor) used by VS Code
++ UI/UX improvements
++ Cell Toolbar
++ New Notebook Toolbar and Compute Controls
++ Notebook Status Bar
++ Inline Kernel Switching
++ R Support
++ Accessibility and Localization improvements
++ Command Palette
++ Additional Keyboard Shortcuts
++ Auto save
++ Improved performance and reliability
+
+Access the following web-based authoring tools from the studio:
+
+| Web-based tool | Description |
+|-|-|
+| Azure ML Studio Notebooks | First-class authoring for notebook files and support for all operations available in the Azure ML Python SDK. |
+
+## 2020-04-27
+
+### Azure Machine Learning SDK for Python v1.4.0
+++ **New features**
+ + AmlCompute clusters now support setting up a managed identity on the cluster at the time of provisioning. Just specify whether you would like to use a system-assigned identity or a user-assigned identity, and pass an identityId for the latter. You can then set up permissions to access various resources like Storage or ACR in a way that the identity of the compute gets used to securely access the data, instead of the token-based approach that AmlCompute employs today. Check out our SDK reference for more information on the parameters, and see the hedged provisioning sketch below.
+
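+A minimal provisioning sketch for the managed identity support described above. The cluster name and the user-assigned identity resource ID are placeholders; for a system-assigned identity, use `identity_type="SystemAssigned"` and omit `identity_id`.
+
+```Python
+from azureml.core import Workspace
+from azureml.core.compute import AmlCompute, ComputeTarget
+
+ws = Workspace.from_config()
+
+# Provision a cluster with a user-assigned managed identity (placeholder resource ID).
+config = AmlCompute.provisioning_configuration(
+    vm_size="STANDARD_DS3_V2",
+    max_nodes=4,
+    identity_type="UserAssigned",
+    identity_id=["/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
+                 "Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>"])
+
+cluster = ComputeTarget.create(ws, "identity-cluster", config)   # hypothetical cluster name
+cluster.wait_for_completion(show_output=True)
+```
+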
+++ **Breaking changes**
+ + AmlCompute clusters supported a Preview feature around run-based creation, which we plan to deprecate in two weeks. You can continue to create persistent compute targets as always by using the AmlCompute class, but the specific approach of specifying the identifier "amlcompute" as the compute target in run config will not be supported in the near future.
+++ **Bug fixes and improvements**
+ + **azureml-automl-runtime**
+ + Enable support for unhashable type when calculating number of unique values in a column.
+ + **azureml-core**
+ + Improved stability when reading from Azure Blob Storage using a TabularDataset.
+ + Improved documentation for the `grant_workspace_msi` parameter for `Datastore.register_azure_blob_store`.
+ + Fixed bug with `datastore.upload` to support the `src_dir` argument ending with a `/` or `\`.
+ + Added actionable error message when trying to upload to an Azure Blob Storage datastore that does not have an access key or SAS token.
+ + **azureml-interpret**
+ + Added upper bound to file size for the visualization data on uploaded explanations.
+ + **azureml-train-automl-client**
+ + Explicitly checking for label_column_name & weight_column_name parameters for AutoMLConfig to be of type string.
+ + **azureml-contrib-pipeline-steps**
+ + ParallelRunStep now supports dataset as pipeline parameter. User can construct pipeline with sample dataset and can change input dataset of the same type (file or tabular) for new pipeline run.
+
+
+## 2020-04-13
+
+### Azure Machine Learning SDK for Python v1.3.0
+++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Added additional telemetry around post-training operations.
+ + Speeds up automatic ARIMA training by using conditional sum of squares (CSS) training for series of length longer than 100. The length used is stored as the constant ARIMA_TRIGGER_CSS_TRAINING_LENGTH within the TimeSeriesInternal class at /src/azureml-automl-core/azureml/automl/core/shared/constants.py
+ + The user logging of forecasting runs was improved; the log now shows more information about which phase is currently running
+ + Disallowed target_rolling_window_size to be set to values less than 2
+ + **azureml-automl-runtime**
+ + Improved the error message shown when duplicated timestamps are found.
+ + Disallowed target_rolling_window_size to be set to values less than 2.
+ + Fixed the lag imputation failure. The issue was caused by the insufficient number of observations needed to seasonally decompose a series. The "de-seasonalized" data is used to compute a partial autocorrelation function (PACF) to determine the lag length.
+ + Enabled column purpose featurization customization for forecasting tasks by featurization config. Numerical and Categorical column purposes for forecasting tasks are now supported.
+ + Enabled drop column featurization customization for forecasting tasks by featurization config.
+ + Enabled imputation customization for forecasting tasks by featurization config. Constant value imputation for the target column and mean, median, most_frequent, and constant value imputation for training data are now supported. A hedged featurization configuration sketch appears at the end of this release's notes.
+ + **azureml-contrib-pipeline-steps**
+ + Accept string compute names to be passed to ParallelRunConfig
+ + **azureml-core**
+ + Added Environment.clone(new_name) API to create a copy of Environment object
+ + Environment.docker.base_dockerfile accepts filepath. If able to resolve a file, the content will be read into base_dockerfile environment property
+ + Automatically reset mutually exclusive values for base_image and base_dockerfile when user manually sets a value in Environment.docker
+ + Added user_managed flag in RSection that indicates whether the environment is managed by user or by AzureML.
+ + Dataset: Fixed dataset download failure if data path containing unicode characters.
+ + Dataset: Improved dataset mount caching mechanism to respect the minimum disk space requirement in Azure Machine Learning Compute, which avoids making the node unusable and causing the job to be canceled.
+ + Dataset: We add an index for the time series column when you access a time series dataset as a pandas dataframe, which speeds up access to time series-based data. Previously, the index was given the same name as the timestamp column, confusing users about which was the actual timestamp column and which was the index. We now don't give any specific name to the index since it should not be used as a column.
+ + Dataset: Fixed dataset authentication issue in sovereign cloud.
+ + Dataset: Fixed `Dataset.to_spark_dataframe` failure for datasets created from Azure PostgreSQL datastores.
+ + **azureml-interpret**
+ + Added global scores to visualization if local importance values are sparse
+ + Updated azureml-interpret to use interpret-community 0.9.*
+ + Fixed issue with downloading explanation that had sparse evaluation data
+ + Added support of sparse format of the explanation object in AutoML
+ + **azureml-pipeline-core**
+ + Support ComputeInstance as compute target in pipelines
+ + **azureml-train-automl-client**
+ + Added additional telemetry around post-training operations.
+ + Fixed the regression in early stopping
+ + Deprecated azureml.dprep.Dataflow as a valid type for input data.
+ + Changing default AutoML experiment time out to six days.
+ + **azureml-train-automl-runtime**
+ + Added additional telemetry around post-training operations.
+ + Added end-to-end support for sparse data in AutoML
+ + **azureml-opendatasets**
+ + Added additional telemetry for service monitor.
+ + Enable front door for blob to increase stability
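+
+A hedged sketch of the forecasting featurization customizations listed under **azureml-automl-runtime** above, using FeaturizationConfig. The column names are hypothetical placeholders.
+
+```Python
+from azureml.automl.core.featurization import FeaturizationConfig
+
+featurization_config = FeaturizationConfig()
+
+# Override column purposes (hypothetical column names).
+featurization_config.add_column_purpose("store_id", "Categorical")
+featurization_config.add_column_purpose("units_on_hand", "Numeric")
+
+# Drop a column from featurization.
+featurization_config.drop_columns = ["internal_notes"]
+
+# Customize imputation for the target column and a training-data column.
+featurization_config.add_transformer_params(
+    "Imputer", ["demand"], {"strategy": "constant", "fill_value": 0})
+featurization_config.add_transformer_params(
+    "Imputer", ["units_on_hand"], {"strategy": "median"})
+
+# Pass the config to AutoMLConfig for a forecasting task, for example:
+# AutoMLConfig(task="forecasting", featurization=featurization_config, ...)
+```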
+
+## 2020-03-23
+
+### Azure Machine Learning SDK for Python v1.2.0
+++ **Breaking changes**
+ + Drop support for Python 2.7
+++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + Adds "--subscription-id" to `az ml model/computetarget/service` commands in the CLI
+ + Adding support for passing customer-managed key(CMK) vault_url, key_name and key_version for ACI deployment
+ + **azureml-automl-core**
+ + Enabled customized imputation with constant value for both X and y data forecasting tasks.
+ + Fixed an issue with showing error messages to the user.
+ + **azureml-automl-runtime**
+ + Fixed an issue with forecasting on data sets containing grains with only one row
+ + Decreased the amount of memory required by the forecasting tasks.
+ + Added better error messages if time column has incorrect format.
+ + Enabled customized imputation with constant value for both X and y data forecasting tasks.
+ + **azureml-core**
+ + Added support for loading ServicePrincipal from environment variables: AZUREML_SERVICE_PRINCIPAL_ID, AZUREML_SERVICE_PRINCIPAL_TENANT_ID, and AZUREML_SERVICE_PRINCIPAL_PASSWORD
+ + Introduced a new parameter `support_multi_line` to `Dataset.Tabular.from_delimited_files`: By default (`support_multi_line=False`), all line breaks, including those in quoted field values, will be interpreted as a record break. Reading data this way is faster and more optimized for parallel execution on multiple CPU cores. However, it may result in silently producing more records with misaligned field values. This should be set to `True` when the delimited files are known to contain quoted line breaks. A hedged usage sketch appears at the end of this release's notes.
+ + Added the ability to register ADLS Gen2 in the Azure Machine Learning CLI
+ + Renamed parameter 'fine_grain_timestamp' to 'timestamp' and parameter 'coarse_grain_timestamp' to 'partition_timestamp' for the with_timestamp_columns() method in TabularDataset to better reflect the usage of the parameters.
+ + Increased max experiment name length to 255.
+ + **azureml-interpret**
+ + Updated azureml-interpret to interpret-community 0.7.*
+ + **azureml-sdk**
+ + Changed dependencies to use compatible-release (tilde) version specifiers to support patching in pre-release and stable releases.
++
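+
+A hedged sketch of the `support_multi_line` option described under **azureml-core** above, assuming a hypothetical `data/comments.csv` file on the default datastore that is known to contain quoted line breaks.
+
+```Python
+from azureml.core import Workspace, Dataset
+
+ws = Workspace.from_config()
+datastore = ws.get_default_datastore()
+
+# Enable multi-line parsing because field values contain quoted line breaks.
+# This is slower than the default single-line mode but avoids misaligned records.
+dataset = Dataset.Tabular.from_delimited_files(
+    path=(datastore, "data/comments.csv"),   # hypothetical path
+    support_multi_line=True)
+
+df = dataset.to_pandas_dataframe()
+```
+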
+## 2020-03-11
+
+### Azure Machine Learning SDK for Python v1.1.5
+++ **Feature deprecation**
+ + **Python 2.7**
+ + Last version to support Python 2.7
+++ **Breaking changes**
+ + **Semantic Versioning 2.0.0**
+ + Starting with version 1.1 Azure ML Python SDK adopts [Semantic Versioning 2.0.0](https://semver.org/). All subsequent versions will follow new numbering scheme and semantic versioning contract.
+++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + Changed the endpoint CLI command name from 'az ml endpoint aks' to 'az ml endpoint realtime' for consistency.
+ + Updated CLI installation instructions for the stable and experimental branch CLI
+ + Single instance profiling was fixed to produce a recommendation and was made available in core sdk.
+ + **azureml-automl-core**
+ + Enabled Batch mode inference (taking multiple rows at once) for AutoML ONNX models
+ + Improved the detection of frequency on data sets lacking data or containing irregular data points
+ + Added the ability to remove data points not complying with the dominant frequency.
+ + Changed the input of the constructor to take a list of options to apply the imputation options for corresponding columns.
+ + The error logging has been improved.
+ + **azureml-automl-runtime**
+ + Fixed the error thrown when a grain that was not present in the training set appeared in the test set
+ + Removed the y_query requirement during scoring on forecasting service
+ + Fixed the issue with forecasting when the data set contains short grains with long time gaps.
+ + Fixed the issue when the auto max horizon is turned on and the date column contains dates in form of strings. Proper conversion and error messages were added for when conversion to date is not possible
+ + Using native NumPy and SciPy for serializing and deserializing intermediate data for FileCacheStore (used for local AutoML runs)
+ + Fixed a bug where failed child runs could get stuck in Running state.
+ + Increased speed of featurization.
+ + Fixed the frequency check during scoring, now the forecasting tasks do not require strict frequency equivalence between train and test set.
+ + Changed the input of the constructor to take a list of options to apply the imputation options for corresponding columns.
+ + Fixed errors related to lag type selection.
+ + Fixed the unclassified error raised on data sets having grains with a single row
+ + Fixed the issue with frequency detection slowness.
+ + Fixes a bug in AutoML exception handling that caused the real reason for training failure to be replaced by an AttributeError.
+ + **azureml-cli-common**
+ + Single instance profiling was fixed to produce a recommendation and was made available in core sdk.
+ + **azureml-contrib-mir**
+ + Adds functionality in the MirWebservice class to retrieve the Access Token
+ + Use token auth for MirWebservice by default during MirWebservice.run() call - Only refresh if call fails
+ + Mir webservice deployment now requires proper Skus [Standard_DS2_v2, Standard_F16, Standard_A2_v2] instead of [Ds2v2, A2v2, and F16] respectively.
+ + **azureml-contrib-pipeline-steps**
+ + Optional parameter side_inputs added to ParallelRunStep. This parameter can be used to mount folder on the container. Currently supported types are DataReference and PipelineData.
+ + Parameters passed in ParallelRunConfig can be overwritten by passing pipeline parameters now. New pipeline parameters supported aml_mini_batch_size, aml_error_threshold, aml_logging_level, aml_run_invocation_timeout (aml_node_count and aml_process_count_per_node are already part of earlier release).
+ + **azureml-core**
+ + Deployed AzureML Webservices will now default to `INFO` logging. This can be controlled by setting the `AZUREML_LOG_LEVEL` environment variable in the deployed service; a hedged sketch appears at the end of this release's notes.
+ + Python sdk uses discovery service to use 'api' endpoint instead of 'pipelines'.
+ + Swap to the new routes in all SDK calls.
+ + Changed routing of calls to the ModelManagementService to a new unified structure.
+ + Made workspace update method publicly available.
+ + Added image_build_compute parameter in workspace update method to allow user updating the compute for image build.
+ + Added deprecation messages to the old profiling workflow. Fixed profiling cpu and memory limits.
+ + Added RSection as part of Environment to run R jobs.
+ + Added validation to `Dataset.mount` to raise error when source of the dataset is not accessible or does not contain any data.
+ + Added `--grant-workspace-msi-access` as an additional parameter for the Datastore CLI for registering Azure Blob Container that will allow you to register Blob Container that is behind a VNet.
+ + Single instance profiling was fixed to produce a recommendation and was made available in core sdk.
+ + Fixed the issue in aks.py _deploy.
+ + Validates the integrity of models being uploaded to avoid silent storage failures.
+ + User may now specify a value for the auth key when regenerating keys for webservices.
+ + Fixed bug where uppercase letters cannot be used as dataset's input name.
+ + **azureml-defaults**
+ + `azureml-dataprep` will now be installed as part of `azureml-defaults`. It is no longer required to install data prep[fuse] manually on compute targets to mount datasets.
+ + **azureml-interpret**
+ + Updated azureml-interpret to interpret-community 0.6.*
+ + Updated azureml-interpret to depend on interpret-community 0.5.0
+ + Added azureml-style exceptions to azureml-interpret
+ + Fixed DeepScoringExplainer serialization for keras models
+ + **azureml-mlflow**
+ + Add support for sovereign clouds to azureml.mlflow
+ + **azureml-pipeline-core**
+ + Pipeline batch scoring notebook now uses ParallelRunStep
+ + Fixed a bug where PythonScriptStep results could be incorrectly reused despite changing the arguments list
+ + Added the ability to set columns' type when calling the parse_* methods on `PipelineOutputFileDataset`
+ + **azureml-pipeline-steps**
+ + Moved the `AutoMLStep` to the `azureml-pipeline-steps` package. Deprecated the `AutoMLStep` within `azureml-train-automl-runtime`.
+ + Added documentation example for dataset as PythonScriptStep input
+ + **azureml-tensorboard**
+ + Updated azureml-tensorboard to support TensorFlow 2.0
+ + Show correct port number when using a custom TensorBoard port on a Compute Instance
+ + **azureml-train-automl-client**
+ + Fixed an issue where certain packages may be installed at incorrect versions on remote runs.
+ + fixed FeaturizationConfig overriding issue that filters custom featurization config.
+ + **azureml-train-automl-runtime**
+ + Fixed the issue with frequency detection in the remote runs
+ + Moved the `AutoMLStep` to the `azureml-pipeline-steps` package. Deprecated the `AutoMLStep` within `azureml-train-automl-runtime`.
+ + **azureml-train-core**
+ + Supporting PyTorch version 1.4 in the PyTorch Estimator
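+
+A hedged sketch of controlling the new webservice log level mentioned under **azureml-core** above by setting the `AZUREML_LOG_LEVEL` environment variable on the scoring environment. The environment name, the entry script, and the `DEBUG` value (a standard Python logging level name) are assumptions.
+
+```Python
+from azureml.core import Environment
+from azureml.core.model import InferenceConfig
+
+# Hypothetical scoring environment for a deployed webservice.
+env = Environment(name="scoring-env")
+env.python.conda_dependencies.add_pip_package("azureml-defaults")
+
+# Raise the webservice log level from the new INFO default.
+env.environment_variables = {"AZUREML_LOG_LEVEL": "DEBUG"}
+
+inference_config = InferenceConfig(entry_script="score.py", environment=env)
+# Pass inference_config to Model.deploy(...) as usual.
+```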
+
+## 2020-03-02
+
+### Azure Machine Learning SDK for Python v1.1.2rc0 (Pre-release)
+++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Enabled Batch mode inference (taking multiple rows at once) for AutoML ONNX models
+ + Improved the detection of frequency on data sets lacking data or containing irregular data points
+ + Added the ability to remove data points not complying with the dominant frequency.
+ + **azureml-automl-runtime**
+ + Fixed the error thrown when a grain that was not present in the training set appeared in the test set
+ + Removed the y_query requirement during scoring on forecasting service
+ + **azureml-contrib-mir**
+ + Adds functionality in the MirWebservice class to retrieve the Access Token
+ + **azureml-core**
+ + Deployed AzureML Webservices will now default to `INFO` logging. This can be controlled by setting the `AZUREML_LOG_LEVEL` environment variable in the deployed service.
+ + Fix iterating on `Dataset.get_all` to return all datasets registered with the workspace.
+ + Improve error message when invalid type is passed to `path` argument of dataset creation APIs.
+ + Python sdk uses discovery service to use 'api' endpoint instead of 'pipelines'.
+ + Swap to the new routes in all SDK calls
+ + Changes routing of calls to the ModelManagementService to a new unified structure
+ + Made workspace update method publicly available.
+ + Added image_build_compute parameter in workspace update method to allow user updating the compute for image build
+ + Added deprecation messages to the old profiling workflow. Fixed profiling cpu and memory limits
+ + **azureml-interpret**
+ + update azureml-interpret to interpret-community 0.6.*
+ + **azureml-mlflow**
+ + Add support for sovereign clouds to azureml.mlflow
+ + **azureml-pipeline-steps**
+ + Moved the `AutoMLStep` to the `azureml-pipeline-steps` package. Deprecated the `AutoMLStep` within `azureml-train-automl-runtime`.
+ + **azureml-train-automl-client**
+ + Fixed an issue where certain packages may be installed at incorrect versions on remote runs.
+ + **azureml-train-automl-runtime**
+ + Fixed the issue with frequency detection in the remote runs
+ + Moved the `AutoMLStep` to the `azureml-pipeline-steps` package. Deprecated the `AutoMLStep` within `azureml-train-automl-runtime`.
+ + **azureml-train-core**
+ + Moved the `AutoMLStep` to the `azureml-pipeline-steps` package. Deprecated the `AutoMLStep` within `azureml-train-automl-runtime`.
+
+## 2020-02-18
+
+### Azure Machine Learning SDK for Python v1.1.1rc0 (Pre-release)
+++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + Single instance profiling was fixed to produce a recommendation and was made available in core sdk.
+ + **azureml-automl-core**
+ + The error logging has been improved.
+ + **azureml-automl-runtime**
+ + Fixed the issue with forecasting when the data set contains short grains with long time gaps.
+ + Fixed the issue when the auto max horizon is turned on and the date column contains dates in form of strings. We added proper conversion and sensible error if conversion to date is not possible
+ + Using native NumPy and SciPy for serializing and deserializing intermediate data for FileCacheStore (used for local AutoML runs)
+ + Fixed a bug where failed child runs could get stuck in Running state.
+ + **azureml-cli-common**
+ + Single instance profiling was fixed to produce a recommendation and was made available in core sdk.
+ + **azureml-core**
+ + Added `--grant-workspace-msi-access` as an additional parameter for the Datastore CLI for registering Azure Blob Container that will allow you to register Blob Container that is behind a VNet
+ + Single instance profiling was fixed to produce a recommendation and was made available in core sdk.
+ + Fixed the issue in aks.py _deploy
+ + Validates the integrity of models being uploaded to avoid silent storage failures.
+ + **azureml-interpret**
+ + added azureml-style exceptions to azureml-interpret
+ + fixed DeepScoringExplainer serialization for keras models
+ + **azureml-pipeline-core**
+ + Pipeline batch scoring notebook now uses ParallelRunStep
+ + **azureml-pipeline-steps**
+ + Moved the `AutoMLStep` to the `azureml-pipeline-steps` package. Deprecated the `AutoMLStep` within `azureml-train-automl-runtime`.
+ + **azureml-contrib-pipeline-steps**
+ + Optional parameter side_inputs added to ParallelRunStep. This parameter can be used to mount folder on the container. Currently supported types are DataReference and PipelineData.
+ + **azureml-tensorboard**
+ + Updated azureml-tensorboard to support TensorFlow 2.0
+ + **azureml-train-automl-client**
+ + Fixed FeaturizationConfig overriding issue that filters custom featurization config.
+ + **azureml-train-automl-runtime**
+ + Moved the `AutoMLStep` to the `azureml-pipeline-steps` package. Deprecated the `AutoMLStep` within `azureml-train-automl-runtime`.
+ + **azureml-train-core**
+ + Supporting PyTorch version 1.4 in the PyTorch Estimator
+
+## 2020-02-04
+
+### Azure Machine Learning SDK for Python v1.1.0rc0 (Pre-release)
+++ **Breaking changes**
+ + **Semantic Versioning 2.0.0**
+ + Starting with version 1.1 Azure ML Python SDK adopts [Semantic Versioning 2.0.0](https://semver.org/). All subsequent versions will follow new numbering scheme and semantic versioning contract.
+
++ **Bug fixes and improvements**
+ + **azureml-automl-runtime**
+ + Increased speed of featurization.
+ + Fixed the frequency check during scoring, now in the forecasting tasks we do not require strict frequency equivalence between train and test set.
+ + **azureml-core**
+ + User may now specify a value for the auth key when regenerating keys for webservices.
+ + **azureml-interpret**
+ + Updated azureml-interpret to depend on interpret-community 0.5.0
+ + **azureml-pipeline-core**
+ + Fixed a bug where PythonScriptStep results could be incorrectly reused despite changing the arguments list
+ + **azureml-pipeline-steps**
+ + Added documentation example for dataset as PythonScriptStep input
+ + **azureml-contrib-pipeline-steps**
+ + Parameters passed in ParallelRunConfig can be overwritten by passing pipeline parameters now. New pipeline parameters supported aml_mini_batch_size, aml_error_threshold, aml_logging_level, aml_run_invocation_timeout (aml_node_count and aml_process_count_per_node are already part of earlier release).
+
+## 2020-01-21
+
+### Azure Machine Learning SDK for Python v1.0.85
+++ **New features**
+ + **azureml-core**
+ + Get the current core usage and quota limitation for AmlCompute resources in a given workspace and subscription
+
+ + **azureml-contrib-pipeline-steps**
+ + Enable user to pass tabular dataset as intermediate result from previous step to parallelrunstep
+++ **Bug fixes and improvements**
+ + **azureml-automl-runtime**
+ + Removed the requirement of y_query column in the request to the deployed forecasting service.
+ + The 'y_query' was removed from the Dominick's Orange Juice notebook service request section.
+ + Fixed the bug preventing forecasting on the deployed models, operating on data sets with date time columns.
+ + Added Matthews Correlation Coefficient as a classification metric, for both binary and multiclass classification.
+ + **azureml-contrib-interpret**
+ + Removed text explainers from azureml-contrib-interpret as text explanation has been moved to the interpret-text repo that will be released soon.
+ + **azureml-core**
+ + Dataset: usages for file dataset no longer depend on numpy and pandas to be installed in the Python env.
+ + Changed LocalWebservice.wait_for_deployment() to check the status of the local Docker container before trying to ping its health endpoint, greatly reducing the amount of time it takes to report a failed deployment.
+ + Fixed the initialization of an internal property used in LocalWebservice.reload() when the service object is created from an existing deployment using the LocalWebservice() constructor.
+ + Edited error message for clarification.
+ + Added a new method called get_access_token() to AksWebservice that returns an AksServiceAccessToken object, which contains the access token, refresh-after timestamp, expiry-on timestamp, and token type. A hedged usage sketch appears at the end of this release's notes.
+ + Deprecated the existing get_token() method in AksWebservice, as the new method returns all of the information that get_token() returns.
+ + Modified the output of the az ml service get-access-token command. Renamed token to accessToken and refreshBy to refreshAfter. Added expiryOn and tokenType properties.
+ + Fixed get_active_runs
+ + **azureml-explain-model**
+ + updated shap to 0.33.0 and interpret-community to 0.4.*
+ + **azureml-interpret**
+ + updated shap to 0.33.0 and interpret-community to 0.4.*
+ + **azureml-train-automl-runtime**
+ + Added Matthews Correlation Coefficient as a classification metric, for both binary and multiclass classification.
+ + Deprecated the preprocess flag and replaced it with featurization; featurization is on by default
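+
+A hedged sketch of the new AksWebservice get_access_token() method described above. The service name is a placeholder, and the attribute names on the returned object are assumed from the description.
+
+```Python
+from azureml.core import Workspace
+from azureml.core.webservice import AksWebservice
+
+ws = Workspace.from_config()
+
+service = AksWebservice(ws, "my-aks-service")   # hypothetical deployed service
+
+token_info = service.get_access_token()
+print(token_info.access_token)    # the bearer token
+print(token_info.refresh_after)   # refresh-after timestamp (attribute name assumed)
+print(token_info.expiry_on)       # expiry-on timestamp (attribute name assumed)
+print(token_info.token_type)
+```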
+
+## 2020-01-06
+
+### Azure Machine Learning SDK for Python v1.0.83
+++ **New features**
+ + Dataset: Added two options `on_error` and `out_of_range_datetime` for `to_pandas_dataframe` to fail when data has error values instead of filling them with `None`. A hedged usage sketch appears at the end of this release's notes.
+ + Workspace: Added the `hbi_workspace` flag for workspaces with sensitive data that enables further encryption and disables advanced diagnostics on workspaces. We also added support for bringing your own keys for the associated Azure Cosmos DB instance, by specifying the `cmk_keyvault` and `resource_cmk_uri` parameters when creating a workspace, which creates an Azure Cosmos DB instance in your subscription while provisioning your workspace. To learn more, see the [Azure Cosmos DB section of data encryption article](../concept-data-encryption.md#azure-cosmos-db).
+++ **Bug fixes and improvements**
+ + **azureml-automl-runtime**
+ + Fixed a regression that caused a TypeError to be raised when running AutoML on Python versions below 3.5.4.
+ + **azureml-core**
+ + Fixed a bug in `datastore.upload_files` where a relative path that didn't start with `./` could not be used.
+ + Added deprecation messages for all Image class code paths
+ + Fixed Model Management URL construction for Azure China 21Vianet region.
+ + Fixed issue where models using source_dir couldn't be packaged for Azure Functions.
+ + Added an option to [Environment.build_local()](/python/api/azureml-core/azureml.core.environment.environment) to push an image into AzureML workspace container registry
+ + Updated the SDK to use new token library on Azure synapse in a back compatible manner.
+ + **azureml-interpret**
+ + Fixed bug where None was returned when no explanations were available for download. Now raises an exception, matching behavior elsewhere.
+ + **azureml-pipeline-steps**
+ + Disallowed passing `DatasetConsumptionConfig`s to `Estimator`'s `inputs` parameter when the `Estimator` will be used in an `EstimatorStep`.
+ + **azureml-sdk**
+ + Added AutoML client to azureml-sdk package, enabling remote AutoML runs to be submitted without installing the full AutoML package.
+ + **azureml-train-automl-client**
+ + Corrected alignment on console output for AutoML runs
+ + Fixed a bug where incorrect version of pandas may be installed on remote amlcompute.
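+
+A hedged sketch of the new `on_error` and `out_of_range_datetime` options for `to_pandas_dataframe` noted in this release. The dataset name is a placeholder, and the `"fail"` option value is assumed from the description.
+
+```Python
+from azureml.core import Workspace, Dataset
+
+ws = Workspace.from_config()
+dataset = Dataset.get_by_name(ws, name="telemetry")   # hypothetical registered dataset
+
+# Fail when data has error values or out-of-range datetimes instead of filling with None.
+df = dataset.to_pandas_dataframe(on_error="fail", out_of_range_datetime="fail")
+```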
+
+## 2019-12-23
+
+### Azure Machine Learning SDK for Python v1.0.81
+++ **Bug fixes and improvements**
+ + **azureml-contrib-interpret**
+ + defer shap dependency to interpret-community from azureml-interpret
+ + **azureml-core**
+ + Compute target can now be specified as a parameter to the corresponding deployment config objects. This is specifically the name of the compute target to deploy to, not the SDK object.
+ + Added CreatedBy information to Model and Service objects. This can be accessed through the .created_by property.
+ + Fixed ContainerImage.run(), which was not correctly setting up the Docker container's HTTP port.
+ + Make `azureml-dataprep` optional for `az ml dataset register` CLI command
+ + Fixed a bug where `TabularDataset.to_pandas_dataframe` would incorrectly fall back to an alternate reader and print out a warning.
+ + **azureml-explain-model**
+ + defer shap dependency to interpret-community from azureml-interpret
+ + **azureml-pipeline-core**
+ + Added new pipeline step `NotebookRunnerStep`, to run a local notebook as a step in pipeline.
+ + Removed deprecated get_all functions for PublishedPipelines, Schedules, and PipelineEndpoints
+ + **azureml-train-automl-client**
+ + Started deprecation of data_script as an input to AutoML.
++
+## 2019-12-09
+
+### Azure Machine Learning SDK for Python v1.0.79
+++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Removed the featurizationConfig from being logged.
+ + Updated logging to log "auto"/"off"/"customized" only.
+ + **azureml-automl-runtime**
+ + Added support for pandas.Series and pandas.Categorical for detecting column data type. Previously only numpy.ndarray was supported
+ + Added related code changes to handle categorical dtype correctly.
+ + The forecast function interface was improved: the y_pred parameter was made optional. The docstrings were improved.
+ + **azureml-contrib-dataset**
+ + Fixed a bug where labeled datasets could not be mounted.
+ + **azureml-core**
+ + Bug fix for `Environment.from_existing_conda_environment(name, conda_environment_name)`. User can create an instance of Environment that is exact replica of the local environment
+ + Changed time series-related Datasets methods to `include_boundary=True` by default.
+ + **azureml-train-automl-client**
+ + Fixed issue where validation results are not printed when show output is set to false.
++
+## 2019-11-25
+
+### Azure Machine Learning SDK for Python v1.0.76
+++ **Breaking changes**
+ + Azureml-Train-AutoML upgrade issues
+ + Upgrading to azureml-train-automl>=1.0.76 from azureml-train-automl<1.0.76 can cause partial installations, causing some AutoML imports to fail. To resolve this, you can run the setup script found at https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/automl_setup.cmd. Or if you are using pip directly you can:
+ + "pip install --upgrade azureml-train-automl"
+ + "pip install --ignore-installed azureml-train-automl-client"
+ + or you can uninstall the old version before upgrading
+ + "pip uninstall azureml-train-automl"
+ + "pip install azureml-train-automl"
+++ **Bug fixes and improvements**
+ + **azureml-automl-runtime**
+ + AutoML will now take into account both true and false classes when calculating averaged scalar metrics for binary classification tasks.
+ + Moved Machine learning and training code in AzureML-AutoML-Core to a new package AzureML-AutoML-Runtime.
+ + **azureml-contrib-dataset**
+ + When calling `to_pandas_dataframe` on a labeled dataset with the download option, you can now specify whether to overwrite existing files or not.
+ + When calling `keep_columns` or `drop_columns` that results in a time series, label, or image column being dropped, the corresponding capabilities will be dropped for the dataset as well.
+ + Fixed an issue with pytorch loader for the object detection task.
+ + **azureml-contrib-interpret**
+ + Removed explanation dashboard widget from azureml-contrib-interpret, changed package to reference the new one in interpret_community
+ + Updated version of interpret-community to 0.2.0
+ + **azureml-core**
+ + Improve performance of `workspace.datasets`.
+ + Added the ability to register Azure SQL Database Datastore using username and password authentication
+ + Fix for loading RunConfigurations from relative paths.
+ + When calling `keep_columns` or `drop_columns` that results in a time series column being dropped, the corresponding capabilities will be dropped for the dataset as well.
+ + **azureml-interpret**
+ + updated version of interpret-community to 0.2.0
+ + **azureml-pipeline-steps**
+ + Documented supported values for `runconfig_pipeline_params` for Azure machine learning pipeline steps.
+ + **azureml-pipeline-core**
+ + Added CLI option to download output in json format for Pipeline commands.
+ + **azureml-train-automl**
+ + Split AzureML-Train-AutoML into two packages, a client package AzureML-Train-AutoML-Client and an ML training package AzureML-Train-AutoML-Runtime
+ + **azureml-train-automl-client**
+ + Added a thin client for submitting AutoML experiments without needing to install any machine learning dependencies locally.
+ + Fixed logging of automatically detected lags, rolling window sizes and maximal horizons in the remote runs.
+ + **azureml-train-automl-runtime**
+ + Added a new AutoML package to isolate machine learning and runtime components from the client.
+ + **azureml-contrib-train-rl**
+ + Added reinforcement learning support in SDK.
+ + Added AmlWindowsCompute support in RL SDK.
++
+## 2019-11-11
+
+### Azure Machine Learning SDK for Python v1.0.74
+
+ + **Preview features**
+ + **azureml-contrib-dataset**
+ + After importing azureml-contrib-dataset, you can call `Dataset.Labeled.from_json_lines` instead of `._Labeled` to create a labeled dataset.
+ + When calling `to_pandas_dataframe` on a labeled dataset with the download option, you can now specify whether to overwrite existing files or not.
+ + When calling `keep_columns` or `drop_columns` that results in a time series, label, or image column being dropped, the corresponding capabilities will be dropped for the dataset as well.
+ + Fixed issues with PyTorch loader when calling `dataset.to_torchvision()`.
+++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + Added Model Profiling to the preview CLI.
+ + Fixes breaking change in Azure Storage causing AzureML CLI to fail.
+ + Added Load Balancer Type to MLC for AKS types
+ + **azureml-automl-core**
+ + Fixed the issue with detection of maximal horizon on time series, having missing values and multiple grains.
+ + Fixed the issue with failures during generation of cross validation splits.
+ + Improved handling of short grains in the forecasting data sets.
+ + Fixed the issue with masking of some user information during logging. Improved logging of the errors during forecasting runs.
+ + Added psutil as a conda dependency to the autogenerated yml deployment file.
+ + **azureml-contrib-mir**
+ + Fixes breaking change in Azure Storage causing AzureML CLI to fail.
+ + **azureml-core**
+ + Fixes a bug that caused models deployed on Azure Functions to produce 500s.
+ + Fixed an issue where the amlignore file was not applied on snapshots.
+ + Added a new API amlcompute.get_active_runs that returns a generator for running and queued runs on a given amlcompute.
+ + Added Load Balancer Type to MLC for AKS types.
+ + Added append_prefix bool parameter to download_files in run.py and download_artifacts_from_prefix in artifacts_client. This flag is used to selectively flatten the origin filepath so only the file or folder name is added to the output_directory
+ + Fix deserialization issue for `run_config.yml` with dataset usage.
+ + When calling `keep_columns` or `drop_columns` that results in a time series column being dropped, the corresponding capabilities will be dropped for the dataset as well.
+ + **azureml-interpret**
+ + Updated interpret-community version to 0.1.0.3
+ + **azureml-train-automl**
+ + Fixed an issue where automl_step might not print validation issues.
+ + Fixed register_model to succeed even if the model's environment is missing dependencies locally.
+ + Fixed an issue where some remote runs were not docker enabled.
+ + Add logging of the exception that is causing a local run to fail prematurely.
+ + **azureml-train-core**
+ + Consider resume_from runs in the calculation of automated hyperparameter tuning best child runs.
+ + **azureml-pipeline-core**
+ + Fixed parameter handling in pipeline argument construction.
+ + Added pipeline description and step type yaml parameter.
+ + New yaml format for Pipeline step and added deprecation warning for old format.
+++
+## 2019-11-04
+
+### Web experience
+
+The collaborative workspace landing page at [https://ml.azure.com](https://ml.azure.com) has been enhanced and rebranded as the Azure Machine Learning studio.
+
+From the studio, you can train, test, deploy, and manage Azure Machine Learning assets such as datasets, pipelines, models, endpoints, and more.
+
+Access the following web-based authoring tools from the studio:
+
+| Web-based tool | Description |
+|-|-|
+| Notebook VM (preview) | Fully managed cloud-based workstation |
+| [Automated machine learning](../tutorial-first-experiment-automated-ml.md) (preview) | No code experience for automating machine learning model development |
+| [Designer](../concept-designer.md) | Drag-and-drop machine learning modeling tool formerly known as the visual interface |
++
+### Azure Machine Learning designer enhancements
++ Formerly known as the visual interface
++ 11 new [modules](../component-reference/component-reference.md) including recommenders, classifiers, and training utilities including feature engineering, cross validation, and data transformation.
+
+### R SDK
+
+Data scientists and AI developers use the [Azure Machine Learning SDK for R](https://github.com/Azure/azureml-sdk-for-r) to build and run machine learning workflows with Azure Machine Learning.
+
+The Azure Machine Learning SDK for R uses the `reticulate` package to bind to the Python SDK. By binding directly to Python, the SDK for R allows you access to core objects and methods implemented in the Python SDK from any R environment you choose.
+
+Main capabilities of the SDK include:
++ Manage cloud resources for monitoring, logging, and organizing your machine learning experiments.
++ Train models using cloud resources, including GPU-accelerated model training.
++ Deploy your models as webservices on Azure Container Instances (ACI) and Azure Kubernetes Service (AKS).
+
+See the [package website](https://azure.github.io/azureml-sdk-for-r) for complete documentation.
+
+### Azure Machine Learning integration with Event Grid
+
+Azure Machine Learning is now a resource provider for Event Grid; you can configure machine learning events through the Azure portal or Azure CLI. Users can create events for run completion, model registration, model deployment, and data drift detected. These events can be routed to event handlers supported by Event Grid for consumption. See the machine learning event [schema](../../event-grid/event-schema-machine-learning.md) and [tutorial](../how-to-use-event-grid.md) articles for more details.
+
+## 2019-10-31
+
+### Azure Machine Learning SDK for Python v1.0.72
+++ **New features**
+ + Added dataset monitors through the [**azureml-datadrift**](/python/api/azureml-datadrift) package, allowing for monitoring time series datasets for data drift or other statistical changes over time. Alerts and events can be triggered if drift is detected or other conditions on the data are met. See [our documentation](how-to-monitor-datasets.md) for details.
+ + Announcing two new editions (also referred to as a SKU interchangeably) in Azure Machine Learning. With this release, you can now create either a Basic or Enterprise Azure Machine Learning workspace. All existing workspaces will be defaulted to the Basic edition, and you can go to the Azure portal or to the studio to upgrade the workspace anytime. You can create either a Basic or Enterprise workspace from the Azure portal. Read [our documentation](./how-to-manage-workspace.md) to learn more. From the SDK, the edition of your workspace can be determined using the "sku" property of your workspace object.
+ + We have also made enhancements to Azure Machine Learning Compute - you can now view metrics for your clusters (like total nodes, running nodes, total core quota) in Azure Monitor, besides viewing Diagnostic logs for debugging. In addition, you can also view currently running or queued runs on your cluster and details such as the IPs of the various nodes on your cluster. You can view these either in the portal or by using corresponding functions in the SDK or CLI.
+
+ + **Preview features**
+ + We are releasing preview support for disk encryption of your local SSD in Azure Machine Learning Compute. Raise a technical support ticket to get your subscription allow listed to use this feature.
+ + Public Preview of Azure Machine Learning Batch Inference. Azure Machine Learning Batch Inference targets large inference jobs that are not time-sensitive. Batch Inference provides cost-effective inference compute scaling, with unparalleled throughput for asynchronous applications. It is optimized for high-throughput, fire-and-forget inference over large collections of data.
+ + [**azureml-contrib-dataset**](/python/api/azureml-contrib-dataset)
+ + Enabled functionalities for labeled dataset
+ ```Python
+ import azureml.core
+ from azureml.core import Workspace, Datastore, Dataset
+ import azureml.contrib.dataset
+ from azureml.contrib.dataset import FileHandlingOption, LabeledDatasetTask
+
+ # create a labeled dataset by passing in your JSON lines file
+ dataset = Dataset._Labeled.from_json_lines(datastore.path('path/to/file.jsonl'), LabeledDatasetTask.IMAGE_CLASSIFICATION)
+
+ # download or mount the files in the `image_url` column
+ dataset.download()
+ dataset.mount()
+
+ # get a pandas dataframe
+ from azureml.data.dataset_type_definitions import FileHandlingOption
+ dataset.to_pandas_dataframe(FileHandlingOption.DOWNLOAD)
+ dataset.to_pandas_dataframe(FileHandlingOption.MOUNT)
+
+ # get a Torchvision dataset
+ dataset.to_torchvision()
+ ```
+++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + CLI now supports model packaging.
+ + Added dataset CLI. For more information: `az ml dataset --help`
+ + Added support for deploying and packaging supported models (ONNX, scikit-learn, and TensorFlow) without an InferenceConfig instance.
+ + Added overwrite flag for service deployment (ACI and AKS) in SDK and CLI. If provided, will overwrite the existing service if service with name already exists. If service doesn't exist, will create new service.
+ + Models can be registered with two new frameworks, Onnx and TensorFlow. Model registration accepts sample input data, sample output data, and resource configuration for the model.
+ + **azureml-automl-core**
+ + Training an iteration would run in a child process only when runtime constraints are being set.
+ + Added a guardrail for forecasting tasks, to check whether a specified max_horizon will cause a memory issue on the given machine or not. If it will, a guardrail message will be displayed.
+ + Added support for complex frequencies like two years and one month. Added a comprehensible error message if the frequency cannot be determined.
+ + Add azureml-defaults to auto generated conda env to solve the model deployment failure
+ + Allow intermediate data in Azure Machine Learning Pipeline to be converted to tabular dataset and used in `AutoMLStep`.
+ + Implemented column purpose update for streaming.
+ + Implemented transformer parameter update for Imputer and HashOneHotEncoder for streaming.
+ + Added the current data size and the minimum required data size to the validation error messages.
+ + Updated the minimum required data size for Cross-validation to guarantee a minimum of two samples in each validation fold.
+ + **azureml-cli-common**
+ + CLI now supports model packaging.
+ + Models can be registered with two new frameworks, Onnx and TensorFlow.
+ + Model registration accepts sample input data, sample output data and resource configuration for the model.
+ + **azureml-contrib-gbdt**
+ + fixed the release channel for the notebook
+ + Added a warning for non-AmlCompute compute target that we don't support
+ + Added LightGBM Estimator to the azureml-contrib-gbdt package
+ + [**azureml-core**](/python/api/azureml-core)
+ + CLI now supports model packaging.
+ + Add deprecation warning for deprecated Dataset APIs. See Dataset API change notice at https://aka.ms/tabular-dataset.
+ + Change [`Dataset.get_by_id`](/python/api/azureml-core/azureml.core.dataset%28class%29#get-by-id-workspace--id-) to return registration name and version if the dataset is registered.
+ + Fix a bug that ScriptRunConfig with dataset as argument cannot be used repeatedly to submit experiment run.
+ + Datasets retrieved during a run will be tracked and can be seen in the run details page or by calling [`run.get_details()`](/python/api/azureml-core/azureml.core.run%28class%29#get-details--) after the run is complete.
+ + Allow intermediate data in Azure Machine Learning Pipeline to be converted to tabular dataset and used in [`AutoMLStep`](/python/api/azureml-train-automl-runtime/azureml.train.automl.runtime.automlstep).
+ + Added support for deploying and packaging supported models (ONNX, scikit-learn, and TensorFlow) without an InferenceConfig instance.
+ + Added overwrite flag for service deployment (ACI and AKS) in SDK and CLI. If provided, will overwrite the existing service if service with name already exists. If service doesn't exist, will create new service.
+ + Models can be registered with two new frameworks, Onnx and TensorFlow. Model registration accepts sample input data, sample output data and resource configuration for the model.
+ + Added new datastore for Azure Database for MySQL. Added example for using Azure Database for MySQL in DataTransferStep in Azure Machine Learning Pipelines.
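+
+ A minimal sketch of registering the new Azure Database for MySQL datastore (server, database, and credential values are placeholders):
+
+ ```python
+ from azureml.core import Datastore
+
+ mysql_datastore = Datastore.register_azure_my_sql(workspace=ws,
+                                                   datastore_name='mysql_datastore',
+                                                   server_name='my-server',
+                                                   database_name='my-database',
+                                                   user_id='my-user',
+                                                   user_password='<password>')
+ ```
+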
+ + Added functionality to add and remove tags from experiments. Added functionality to remove tags from runs.
+ + Added overwrite flag for service deployment (ACI and AKS) in SDK and CLI. If provided, will overwrite the existing service if service with name already exists. If service doesn't exist, will create new service.
+ + [**azureml-datadrift**](/python/api/azureml-datadrift)
+ + Moved from `azureml-contrib-datadrift` into `azureml-datadrift`
+ + Added support for monitoring time series datasets for drift and other statistical measures
+ + New methods `create_from_model()` and `create_from_dataset()` to the [`DataDriftDetector`](/python/api/azureml-datadrift/azureml.datadrift.datadriftdetector%28class%29) class. The `create()` method will be deprecated.
+ + Adjustments to the visualizations in Python and UI in the Azure Machine Learning studio.
+ + Support weekly and monthly monitor scheduling, in addition to daily for dataset monitors.
+ + Support backfill of data monitor metrics to analyze historical data for dataset monitors.
+ + Various bug fixes
+ + [**azureml-pipeline-core**](/python/api/azureml-pipeline-core)
+ + azureml-dataprep is no longer needed to submit an Azure Machine Learning Pipeline run from the pipeline `yaml` file.
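+
+ A minimal sketch of submitting a pipeline defined in a `yaml` file (the file name and experiment name are assumptions):
+
+ ```python
+ from azureml.core import Experiment, Workspace
+ from azureml.pipeline.core import Pipeline
+
+ ws = Workspace.from_config()
+ pipeline = Pipeline.load_yaml(ws, filename='pipeline.yml')
+ run = Experiment(ws, 'yaml-pipeline').submit(pipeline)
+ ```
+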
+ + [**azureml-train-automl**](/python/api/azureml-train-automl-runtime/)
+ + Added azureml-defaults to the auto-generated conda environment to resolve the model deployment failure
+ + AutoML remote training now includes azureml-defaults to allow reuse of training env for inference.
+ + **azureml-train-core**
+ + Added PyTorch 1.3 support in [`PyTorch`](/python/api/azureml-train-core/azureml.train.dnn.pytorch) estimator
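+
+ For example, a hedged sketch of requesting the new framework version on the estimator (the compute target and script names are placeholders):
+
+ ```python
+ from azureml.train.dnn import PyTorch
+
+ estimator = PyTorch(source_directory='./src',
+                     entry_script='train.py',
+                     compute_target=compute_target,
+                     framework_version='1.3',
+                     use_gpu=True)
+ ```
+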
+
+## 2019-10-21
+
+### Visual interface (preview)
+++ The Azure Machine Learning visual interface (preview) has been overhauled to run on [Azure Machine Learning pipelines](../concept-ml-pipelines.md). Pipelines (previously known as experiments) authored in the visual interface are now fully integrated with the core Azure Machine Learning experience.
+ + Unified management experience with SDK assets
+ + Versioning and tracking for visual interface models, pipelines, and endpoints
+ + Redesigned UI
+ + Added batch inference deployment
+ + Added Azure Kubernetes Service (AKS) support for inference compute targets
+ + New Python-step pipeline authoring workflow
+ + New [landing page](https://ml.azure.com) for visual authoring tools
+++ **New modules**
+ + Apply math operation
+ + Apply SQL transformation
+ + Clip values
+ + Summarize data
+ + Import from SQL Database
+
+## 2019-10-14
+
+### Azure Machine Learning SDK for Python v1.0.69
+++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Limiting model explanations to best run rather than computing explanations for every run. Making this behavior change for local, remote and ADB.
+ + Added support for on-demand model explanations for UI
+ + Added psutil as a dependency of `automl` and included psutil as a conda dependency in amlcompute.
+ + Fixed an issue with heuristic lags and rolling window sizes on forecasting data sets where some series could cause linear algebra errors
+ + Added a printout of the heuristically determined parameters in forecasting runs.
+ + **azureml-contrib-datadrift**
+ + Added protection while creating output metrics if dataset level drift is not in the first section.
+ + **azureml-contrib-interpret**
+ + azureml-contrib-explain-model package has been renamed to azureml-contrib-interpret
+ + **[azureml-core](/python/api/azureml-core)**
+ + Added API to unregister datasets. dataset.[unregister_all_versions()](/python/api/azureml-core/azureml.data.abstract_datastore.abstractdatastore#unregister--).
+ + Added Dataset API to check data changed time. `dataset.data_changed_time`.
+ + Being able to consume [FileDataset](/python/api/azureml-core/azureml.data.filedataset) and [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) as inputs to [PythonScriptStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.python_script_step.pythonscriptstep), [EstimatorStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.estimatorstep), and [HyperDriveStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.hyperdrivestep) in the Azure Machine Learning Pipeline.
+ + Performance of FileDataset.[mount()](/python/api/azureml-core/azureml.data.filedataset#mount-mount-point-none-kwargs-) has been improved for folders with a large number of files
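+
+ A minimal sketch of passing a dataset into a pipeline step (the dataset, compute target, and script names are assumptions):
+
+ ```python
+ from azureml.pipeline.steps import PythonScriptStep
+
+ train_step = PythonScriptStep(script_name='train.py',
+                               source_directory='./src',
+                               inputs=[tabular_dataset.as_named_input('training_data')],
+                               compute_target=compute_target)
+ ```
+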
+ + Added URL to known error recommendations in run details.
+ + Fixed a bug in [run.get_metrics](/python/api/azureml-core/azureml.core.run.run#get-metrics-name-none--recursive-false--run-type-none--populate-false-) where requests would fail if a run had too many children
+ + Added support for authentication on Arcadia cluster.
+ + **azureml-datadrift**
+ + The show attribute of the DataDriftDetector class won't support the optional argument 'with_details' anymore. The show attribute will only present data drift coefficient and data drift contribution of feature columns.
+ + DataDriftDetector attribute 'get_output' behavior changes:
+ + Input parameter start_time, end_time are optional instead of mandatory;
+ + Input specific start_time and/or end_time with a specific run_id in the same invoking will result in value error exception because they are mutually exclusive
+ + By input specific start_time and/or end_time, only results of scheduled runs will be returned;
+ + Parameter 'daily_latest_only' is deprecated.
+ + Support retrieving Dataset-based Data Drift outputs.
+ + **azureml-explain-model**
+ + Renamed the AzureML-explain-model package to AzureML-interpret, keeping the old package for backward compatibility for now
+ + Fixed an `automl` bug where raw explanations were set to the classification task instead of regression by default on download from ExplanationClient
+ + Add support for `ScoringExplainer` to be created directly using `MimicWrapper`
+ + **azureml-pipeline-core**
+ + Improved performance for large Pipeline creation
+ + **azureml-train-core**
+ + Added TensorFlow 2.0 support in TensorFlow Estimator
+ + **azureml-train-automl**
+ + Creating an [Experiment](/python/api/azureml-core/azureml.core.experiment.experiment) object gets or creates the experiment in the Azure Machine Learning workspace for run history tracking. The experiment ID and archived time are populated in the Experiment object on creation. Example:
+
+ ```python
+ experiment = Experiment(workspace, "New Experiment")
+ experiment_id = experiment.id
+ ```
+ [archive()](/python/api/azureml-core/azureml.core.experiment.experiment#archive--) and [reactivate()](/python/api/azureml-core/azureml.core.experiment.experiment#reactivate-new-name-none-) are functions that can be called on an experiment to hide and restore the experiment from being shown in the UX or returned by default in a call to list experiments. If a new experiment is created with the same name as an archived experiment, you can rename the archived experiment when reactivating by passing a new name. There can only be one active experiment with a given name. Example:
+
+ ```python
+ experiment1 = Experiment(workspace, "Active Experiment")
+ experiment1.archive()
+ # Create new active experiment with the same name as the archived.
+ experiment2 = Experiment(workspace, "Active Experiment")
+ experiment1.reactivate(new_name="Previous Active Experiment")
+ ```
+ The static method [list()](/python/api/azureml-core/azureml.core.experiment.experiment#list-workspace--experiment-name-none--view-type--activeonlytags-none-) on Experiment can take a name filter and ViewType filter. ViewType values are "ACTIVE_ONLY", "ARCHIVED_ONLY" and "ALL". Example:
+
+ ```python
+ archived_experiments = Experiment.list(workspace, view_type="ARCHIVED_ONLY")
+ all_first_experiments = Experiment.list(workspace, name="First Experiment", view_type="ALL")
+ ```
+ + Support using environment for model deployment, and service update.
+ + **[azureml-datadrift](/python/api/azureml-datadrift)**
+ + The show attribute of [DataDriftDetector](/python/api/azureml-datadrift/azureml.datadrift.datadriftdetector.datadriftdetector) class won't support optional argument 'with_details' anymore. The show attribute will only present data drift coefficient and data drift contribution of feature columns.
+ + DataDriftDetector function [get_output](/python/api/azureml-datadrift/azureml.datadrift.datadriftdetector.datadriftdetector#get-output-start-time-none--end-time-none--run-id-none-) behavior changes:
+ + Input parameter start_time, end_time are optional instead of mandatory;
+ + Input specific start_time and/or end_time with a specific run_id in the same invoking will result in value error exception because they are mutually exclusive;
+ + By input specific start_time and/or end_time, only results of scheduled runs will be returned;
+ + Parameter 'daily_latest_only' is deprecated.
+ + Support retrieving Dataset-based Data Drift outputs.
+ + **azureml-explain-model**
+ + Add support for [ScoringExplainer](/python/api/azureml-interpret/azureml.interpret.scoring.scoring_explainer.scoringexplainer) to be created directly using MimicWrapper
+ + **[azureml-pipeline-core](/python/api/azureml-pipeline-core)**
+ + Improved performance for large Pipeline creation.
+ + **[azureml-train-core](/python/api/azureml-train-core)**
+ + Added TensorFlow 2.0 support in [TensorFlow](/python/api/azureml-train-core/azureml.train.dnn.tensorflow) Estimator.
+ + **[azureml-train-automl](/python/api/azureml-train-automl-runtime/)**
+ + The parent run will no longer fail when the setup iteration fails, as the orchestration already takes care of it.
+ + Added local-docker and local-conda support for AutoML experiments.
++
+## 2019-10-08
+
+### New web experience (preview) for Azure Machine Learning workspaces
+
+The Experiment tab in the [new workspace portal](https://ml.azure.com) has been updated so data scientists can monitor experiments in a more performant way. You can explore the following features:
++ Experiment metadata to easily filter and sort your list of experiments
++ Simplified and performant experiment details pages that allow you to visualize and compare your runs
++ New design to run details pages to understand and monitor your training runs
+
+## 2019-09-30
+
+### Azure Machine Learning SDK for Python v1.0.65
+
+ + **New features**
+ + Added curated environments. These environments have been pre-configured with libraries for common machine learning tasks, and have been pre-built and cached as Docker images for faster execution. They appear by default in [Workspace](/python/api/azureml-core/azureml.core.workspace%28class%29)'s list of environments, with the prefix "AzureML".
+
+ + **[azureml-train-automl](/python/api/azureml-train-automl-runtime/)**
+ + Added ONNX conversion support for ADB and HDI
+++ **Preview features**
+ + **[azureml-train-automl](/python/api/azureml-train-automl-runtime/)**
+ + Supported BERT and BiLSTM as text featurizer (preview only)
+ + Supported featurization customization for column purpose and transformer parameters (preview only)
+ + Supported raw explanations when user enables model explanation during training (preview only)
+ + Added Prophet for `timeseries` forecasting as a trainable pipeline (preview only)
+
+ + **azureml-contrib-datadrift**
+ + Packages relocated from azureml-contrib-datadrift to azureml-datadrift; the `contrib` package will be removed in a future release
+++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Introduced FeaturizationConfig to [AutoMLConfig](/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig) and AutoMLBaseSettings
+ + Override Column Purpose for Featurization with given column and feature type
+ + Override transformer parameters
+ + Added deprecation message for explain_model() and retrieve_model_explanations().
+ + Added Prophet as a trainable pipeline (preview only).
+ + Added support for automatic detection of target lags, rolling window size, and maximal horizon. If one of target_lags, target_rolling_window_size, or max_horizon is set to 'auto', heuristics will be applied to estimate the value of the corresponding parameter based on the training data.
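+
+ A hedged sketch of opting into these heuristics from AutoMLConfig (the dataset, column names, and other settings are placeholders; the parameter names follow this note):
+
+ ```python
+ from azureml.train.automl import AutoMLConfig
+
+ automl_config = AutoMLConfig(task='forecasting',
+                              training_data=train_data,
+                              label_column_name='quantity',
+                              time_column_name='date',
+                              target_lags='auto',
+                              target_rolling_window_size='auto',
+                              max_horizon='auto',
+                              iterations=10)
+ ```
+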
+ + Fixed forecasting for the case when the data set contains one grain column, that grain is of a numeric type, and there is a gap between the train and test sets.
+ + Fixed the error message about the duplicated index in remote runs for forecasting tasks.
+ + Added a guardrail to check whether a dataset is imbalanced. If it is, a guardrail message is written to the console.
+ + **azureml-core**
+ + Added ability to retrieve SAS URL to model in storage through the model object. Ex: model.get_sas_url()
+ + Introduce `run.get_details()['datasets']` to get datasets associated with the submitted run
+ + Add API `Dataset.Tabular.from_json_lines_files` to create a TabularDataset from JSON Lines files. To learn about this tabular data in JSON Lines files on TabularDataset, visit [this article](how-to-create-register-datasets.md) for documentation.
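+
+ A minimal sketch, assuming a datastore named `datastore` that contains JSON Lines files:
+
+ ```python
+ from azureml.core import Dataset
+
+ jsonl_dataset = Dataset.Tabular.from_json_lines_files(path=(datastore, 'data/**/*.jsonl'))
+ df = jsonl_dataset.to_pandas_dataframe()
+ ```
+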
+ + Added additional VM size fields (OS Disk, number of GPUs) to the supported_vmsizes () function
+ + Added additional fields to the list_nodes () function to show the run, the private and the public IP, the port etc.
+ + Ability to specify a new field during cluster provisioning --remotelogin_port_public_access which can be set to enabled or disabled depending on whether you would like to leave the SSH port open or closed at the time of creating the cluster. If you do not specify it, the service will smartly open or close the port depending on whether you are deploying the cluster inside a VNet.
+ + **[azureml-core](/python/api/azureml-core/azureml.core)**
+ + Added ability to retrieve SAS URL to model in storage through the model object. Ex: model.[get_sas_url()](/python/api/azureml-core/azureml.core.model.model#get-sas-urls--)
+ + Introduce run.[get_details](/python/api/azureml-core/azureml.core.run%28class%29#get-details--)['datasets'] to get datasets associated with the submitted run
+ + Add API `Dataset.Tabular`.[from_json_lines_files()](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-json-lines-files-path--validate-true--include-path-false--set-column-types-none--partition-format-none-) to create a TabularDataset from JSON Lines files. To learn about this tabular data in JSON Lines files on TabularDataset, visit https://aka.ms/azureml-data for documentation.
+ + Added additional VM size fields (OS Disk, number of GPUs) to the [supported_vmsizes()](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute#supported-vmsizes-workspace--location-none-) function
+ + Added additional fields to the [list_nodes()](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute#list-nodes--) function to show the run, the private, and the public IP, the port etc.
+ + Ability to specify a new field during cluster [provisioning](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute#provisioning-configuration-vm-size--vm-priority--dedicatedmin-nodes-0--max-nodes-none--idle-seconds-before-scaledown-none--admin-username-none--admin-user-password-none--admin-user-ssh-key-none--vnet-resourcegroup-name-none--vnet-name-none--subnet-name-none--tags-none--description-none--remote-login-port-public-access--notspecified--) that can be set to enabled or disabled depending on whether you would like to leave the SSH port open or closed at the time of creating the cluster. If you do not specify it, the service will smartly open or close the port depending on whether you are deploying the cluster inside a VNet.
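+
+ A hedged sketch of setting the new field when provisioning a cluster (the cluster name and VM size are placeholders):
+
+ ```python
+ from azureml.core.compute import AmlCompute, ComputeTarget
+
+ config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
+                                                max_nodes=4,
+                                                remote_login_port_public_access='Disabled')
+ compute_target = ComputeTarget.create(ws, 'cpu-cluster', config)
+ ```
+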
+ + **azureml-explain-model**
+ + Improved documentation for Explanation outputs in the classification scenario.
+ + Added the ability to upload the predicted y values on the explanation for the evaluation examples. Unlocks more useful visualizations.
+ + Added explainer property to MimicWrapper to enable getting the underlying MimicExplainer.
+ + **[azureml-pipeline-core](/python/api/azureml-pipeline-core)**
+ + Added a [notebook](https://aka.ms/pl-modulestep) to describe [Module](/python/api/azureml-pipeline-core/azureml.pipeline.core.module%28class%29), ModuleVersion, and [ModuleStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.modulestep).
+ + **[azureml-pipeline-steps](/python/api/azureml-pipeline-steps)**
+ + Added [RScriptStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.rscriptstep) to support R script run via AML pipeline.
+ + Fixed metadata parameters parsing in AzureBatchStep that was causing the error message "assignment for parameter SubscriptionId is not specified".
+ + **[azureml-train-automl](/python/api/azureml-train-automl-runtime/)**
+ + Supported training_data, validation_data, label_column_name, weight_column_name as data input format.
+ + Added deprecation message for [explain_model()](/python/api/azureml-train-automl-runtime/azureml.train.automl.runtime.automlexplainer#explain-model-fitted-model--x-train--x-test--best-run-none--features-none--y-train-none-kwargs-) and [retrieve_model_explanations()](/python/api/azureml-train-automl-runtime/azureml.train.automl.runtime.automlexplainer#retrieve-model-explanation-child-run-).
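+
+ A minimal sketch of the new data input format (the datasets and column names are assumptions):
+
+ ```python
+ from azureml.train.automl import AutoMLConfig
+
+ automl_config = AutoMLConfig(task='classification',
+                              training_data=train_dataset,
+                              validation_data=validation_dataset,
+                              label_column_name='label',
+                              weight_column_name='weight',
+                              primary_metric='AUC_weighted',
+                              iterations=10)
+ ```
+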
++
+## 2019-09-16
+
+### Azure Machine Learning SDK for Python v1.0.62
+++ **New features**
+ + Introduced the `timeseries` trait on TabularDataset. This trait enables easy timestamp filtering on the data of a TabularDataset, such as taking all data between a range of times or the most recent data. See https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb for an example notebook.
+ + Enabled training with TabularDataset and FileDataset.
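+
+ A minimal sketch of the `timeseries` trait described above (the dataset and timestamp column name are assumptions):
+
+ ```python
+ from datetime import datetime, timedelta
+
+ ts_dataset = tabular_dataset.with_timestamp_columns(fine_grain_timestamp='timestamp')
+ last_week = ts_dataset.time_recent(timedelta(days=7))
+ january = ts_dataset.time_between(datetime(2019, 1, 1), datetime(2019, 1, 31))
+ ```
+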
+
+ + **azureml-train-core**
+ + Added `Nccl` and `Gloo` support in PyTorch estimator
+++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Deprecated the AutoML setting 'lag_length' and the LaggingTransformer.
+ + Fixed validation of input data when it is specified in a Dataflow format
+ + Modified the fit_pipeline.py to generate the graph json and upload to artifacts.
+ + Rendered the graph under `userrun` using `Cytoscape`.
+ + **azureml-core**
+ + Revisited the exception handling in ADB code and made changes to align with the new error handling
+ + Added automatic MSI authentication for Notebook VMs.
+ + Fixes bug where corrupt or empty models could be uploaded because of failed retries.
+ + Fixed the bug where `DataReference` name changes when the `DataReference` mode changes (for example, when calling `as_upload`, `as_download`, or `as_mount`).
+ + Make `mount_point` and `target_path` optional for `FileDataset.mount` and `FileDataset.download`.
+ + An exception stating that the timestamp column cannot be found will be thrown if a time series-related API is called without a fine-grain timestamp column assigned, or if the assigned timestamp columns have been dropped.
+ + Time series columns should be assigned with a column whose type is Date; otherwise, an exception is expected
+ + The time series column-assigning API 'with_timestamp_columns' can take a None value for the fine/coarse grain timestamp column name, which will clear previously assigned timestamp columns.
+ + An exception will be thrown when either the coarse grain or the fine grain timestamp column is dropped, with an indication for the user that dropping can be done after either excluding the timestamp column from the drop list or calling with_time_stamp with a None value to release the timestamp columns
+ + An exception will be thrown when either the coarse grain or the fine grain timestamp column is not included in the keep columns list, with an indication for the user that keeping can be done after either including the timestamp column in the keep column list or calling with_time_stamp with a None value to release the timestamp columns.
+ + Added logging for the size of a registered model.
+ + **azureml-explain-model**
+ + Fixed warning printed to console when "packaging" Python package is not installed: "Using older than supported version of lightgbm, please upgrade to version greater than 2.2.1"
+ + Fixed download model explanation with sharding for global explanations with many features
+ + Fixed mimic explainer missing initialization examples on output explanation
+ + Fixed immutable error on set properties when uploading with explanation client using two different types of models
+ + Added a get_raw param to scoring explainer.explain() so one scoring explainer can return both engineered and raw values.
+ + **azureml-train-automl**
+ + Introduced public APIs from AutoML for supporting explanations from the `automl` explain SDK. This is a newer way of supporting AutoML explanations by decoupling AutoML featurization and the explain SDK. Integrated raw explanation support from the azureml explain SDK for AutoML models.
+ + Removing azureml-defaults from remote training environments.
+ + Changed default cache store location from FileCacheStore based one to AzureFileCacheStore one for AutoML on Azure Databricks code path.
+ + Fixed validation of input data when it is specified in a Dataflow format
+ + **azureml-train-core**
+ + Reverted source_directory_data_store deprecation.
+ + Added ability to override azureml installed package versions.
+ + Added dockerfile support in `environment_definition` parameter in estimators.
+ + Simplified distributed training parameters in estimators.
+
+ ```python
+ from azureml.train.dnn import TensorFlow, Mpi, ParameterServer
+ ```
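+
+ For example, a hedged sketch of the simplified parameters (the script and compute target names are placeholders):
+
+ ```python
+ estimator = TensorFlow(source_directory='./src',
+                        entry_script='train.py',
+                        compute_target=compute_target,
+                        node_count=2,
+                        distributed_training=Mpi(process_count_per_node=1))
+ ```
+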
+
+## 2019-09-09
+
+### New web experience (preview) for Azure Machine Learning workspaces
+The new web experience enables data scientists and data engineers to complete their end-to-end machine learning lifecycle from prepping and visualizing data to training and deploying models in a single location.
+
+![Azure Machine Learning workspace UI (preview)](../media/azure-machine-learning-release-notes/new-ui-for-workspaces.jpg)
+
+**Key features:**
+
+Using this new Azure Machine Learning interface, you can now:
++ Manage your notebooks or link out to Jupyter
++ [Run automated ML experiments](../tutorial-first-experiment-automated-ml.md)
++ [Create datasets from local files, datastores, & web files](how-to-create-register-datasets.md)
++ Explore & prepare datasets for model creation
++ Monitor data drift for your models
++ View recent resources from a dashboard
+
+At the time of this release, the following browsers are supported: Chrome, Firefox, Safari, and Microsoft Edge Preview.
+
+**Known issues:**
+
+1. Refresh your browser if you see "Something went wrong! Error loading chunk files" when deployment is in progress.
+
+1. Can't delete or rename files in Notebooks and Files. During Public Preview, you can use the Jupyter UI or Terminal in the Notebook VM to perform file update operations. Because it is a mounted network file system, all changes you make on the Notebook VM are immediately reflected in the Notebook Workspace.
+
+1. To SSH into the Notebook VM:
+ 1. Find the SSH keys that were created during VM setup. Or, find the keys in the Azure Machine Learning workspace > open Compute tab > locate Notebook VM in the list > open its properties: copy the keys from the dialog.
+ 1. Import those public and private SSH keys to your local machine.
+ 1. Use them to SSH into the Notebook VM.
+
+## 2019-09-03
+### Azure Machine Learning SDK for Python v1.0.60
+++ **New features**
+ + Introduced FileDataset, which references single or multiple files in your datastores or public urls. The files can be of any format. FileDataset provides you with the ability to download or mount the files to your compute.
+ + Added Pipeline Yaml Support for PythonScript Step, Adla Step, Databricks Step, DataTransferStep, and AzureBatch Step
+++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + AutoArima is now a suggestable pipeline for preview only.
+ + Improved error reporting for forecasting.
+ + Improved the logging by using custom exceptions instead of generic in the forecasting tasks.
+ + Removed the check on max_concurrent_iterations to be less than total number of iterations.
+ + AutoML models now return AutoMLExceptions
+ + This release improves the execution performance of automated machine learning local runs.
+ + **azureml-core**
+ + Introduce Dataset.get_all(workspace), which returns a dictionary of `TabularDataset` and `FileDataset` objects keyed by their registration name.
+
+ ```python
+ workspace = Workspace.from_config()
+ all_datasets = Dataset.get_all(workspace)
+ mydata = all_datasets['my-data']
+ ```
+
+ + Introduce `partition_format` as argument to `Dataset.Tabular.from_delimited_files` and `Dataset.Tabular.from_parquet.files`. The partition information of each data path will be extracted into columns based on the specified format. '{column_name}' creates string column, and '{column_name:yyyy/MM/dd/HH/mm/ss}' creates datetime column, where 'yyyy', 'MM', 'dd', 'HH', 'mm' and 'ss' are used to extract year, month, day, hour, minute, and second for the datetime type. The partition_format should start from the position of first partition key until the end of file path. For example, given the path '../USA/2019/01/01/data.csv' where the partition is by country and time, partition_format='/{Country}/{PartitionDate:yyyy/MM/dd}/data.csv' creates string column 'Country' with value 'USA' and datetime column 'PartitionDate' with value '2019-01-01'.
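+
+ A minimal sketch matching the example above (the datastore and glob pattern are assumptions):
+
+ ```python
+ from azureml.core import Dataset
+
+ dataset = Dataset.Tabular.from_delimited_files(
+     path=(datastore, 'input/**/*.csv'),
+     partition_format='/{Country}/{PartitionDate:yyyy/MM/dd}/data.csv')
+ ```
+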
+ + `to_csv_files` and `to_parquet_files` methods have been added to `TabularDataset`. These methods enable conversion between a `TabularDataset` and a `FileDataset` by converting the data to files of the specified format.
+ + Automatically log into the base image registry when saving a Dockerfile generated by Model.package().
+ + 'gpu_support' is no longer necessary; AML now automatically detects and uses the nvidia docker extension when it is available. It will be removed in a future release.
+ + Added support to create, update, and use PipelineDrafts.
+ + This release improves the execution performance of automated machine learning local runs.
+ + Users can query metrics from run history by name.
+ + Improved the logging by using custom exceptions instead of generic in the forecasting tasks.
+ + **azureml-explain-model**
+ + Added feature_maps parameter to the new MimicWrapper, allowing users to get raw feature explanations.
+ + Dataset uploads are now off by default for explanation upload, and can be re-enabled with upload_datasets=True
+ + Added "is_law" filtering parameters to explanation list and download functions.
+ + Adds method `get_raw_explanation(feature_maps)` to both global and local explanation objects.
+ + Added version check to lightgbm with printed warning if below supported version
+ + Optimized memory usage when batching explanations
+ + AutoML models now return AutoMLExceptions
+ + **azureml-pipeline-core**
+ + Added support to create, update, and use PipelineDrafts - can be used to maintain mutable pipeline definitions and use them interactively to run
+ + **azureml-train-automl**
+ + Created feature to install specific versions of gpu-capable pytorch v1.1.0, :::no-loc text="cuda"::: toolkit 9.0, pytorch-transformers, which is required to enable BERT/ XLNet in the remote Python runtime environment.
+ + **azureml-train-core**
+ + Some hyperparameter space definition errors now fail early directly in the SDK instead of on the server side.
+
+### Azure Machine Learning Data Prep SDK v1.1.14
++ **Bug fixes and improvements**
+ + Enabled writing to ADLS/ADLSGen2 using raw path and credentials.
+ + Fixed a bug that caused `include_path=True` to not work for `read_parquet`.
+ + Fixed `to_pandas_dataframe()` failure caused by exception "Invalid property value: hostSecret".
+ + Fixed a bug where files could not be read on DBFS in Spark mode.
+
+## 2019-08-19
+
+### Azure Machine Learning SDK for Python v1.0.57
++ **New features**
+ + Enabled `TabularDataset` to be consumed by AutomatedML. To learn more about `TabularDataset`, visit https://aka.ms/azureml/howto/createdatasets.
+++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + You can now update the TLS/SSL certificate for the scoring endpoint deployed on an AKS cluster, for both Microsoft-generated and customer certificates.
+ + **azureml-automl-core**
+ + Fixed an issue in AutoML where rows with missing labels were not removed properly.
+ + Improved error logging in AutoML; full error messages will now always be written to the log file.
+ + AutoML has updated its package pinning to include `azureml-defaults`, `azureml-explain-model`, and `azureml-dataprep`. AutoML will no longer warn on package mismatches (except for `azureml-train-automl` package).
+ + Fixed an issue in `timeseries` where cv splits are of unequal size causing bin calculation to fail.
+ + Fixed an inconsistency when running the ensemble iteration for the Cross-Validation training type: if downloading the models trained on the entire dataset failed, the model weights could be inconsistent with the models that were fed into the voting ensemble.
+ + Fixed the error raised when training and/or validation labels (y and y_valid) are provided as a pandas dataframe rather than as a numpy array.
+ + Fixed the issue with the forecasting tasks when None was encountered in the Boolean columns of input tables.
+ + Allow AutoML users to drop training series that are not long enough when forecasting. Allow AutoML users to drop grains from the test set that do not exist in the training set when forecasting.
+ + **azureml-core**
+ + Fixed issue with blob_cache_timeout parameter ordering.
+ + Added external fit and transform exception types to system errors.
+ + Added support for Key Vault secrets for remote runs. Add an `azureml.core.keyvault.Keyvault` class to add, get, and list secrets from the key vault associated with your workspace. Supported operations are:
+ + azureml.core.workspace.Workspace.get_default_keyvault()
+ + azureml.core.keyvault.Keyvault.set_secret(name, value)
+ + azureml.core.keyvault.Keyvault.set_secrets(secrets_dict)
+ + azureml.core.keyvault.Keyvault.get_secret(name)
+ + azureml.core.keyvault.Keyvault.get_secrets(secrets_list)
+ + azureml.core.keyvault.Keyvault.list_secrets()
+ + Additional methods to obtain default keyvault and get secrets during remote run:
+ + azureml.core.workspace.Workspace.get_default_keyvault()
+ + azureml.core.run.Run.get_secret(name)
+ + azureml.core.run.Run.get_secrets(secrets_list)
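+
+ A minimal sketch of the secret workflow listed above (the secret name and value are placeholders):
+
+ ```python
+ from azureml.core import Run, Workspace
+
+ ws = Workspace.from_config()
+ keyvault = ws.get_default_keyvault()
+ keyvault.set_secret(name='db-password', value='<secret-value>')
+
+ # Inside a remote run, read it back without hard-coding the value:
+ run = Run.get_context()
+ db_password = run.get_secret(name='db-password')
+ ```
+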
+ + Added additional override parameters to submit-hyperdrive CLI command.
+ + Improve reliability of API calls by expanding retries to common requests library exceptions.
+ + Add support for submitting runs from a submitted run.
+ + Fixed expiring SAS token issue in FileWatcher, which caused files to stop being uploaded after their initial token had expired.
+ + Supported importing HTTP csv/tsv files in dataset Python SDK.
+ + Deprecated the Workspace.setup() method. Warning message shown to users suggests using create() or get()/from_config() instead.
+ + Added Environment.add_private_pip_wheel(), which enables uploading private custom Python packages (`whl` files) to the workspace and securely using them to build/materialize the environment.
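+
+ A hedged sketch of using the new method (the wheel file name and environment name are assumptions):
+
+ ```python
+ from azureml.core import Environment, Workspace
+
+ ws = Workspace.from_config()
+ whl_url = Environment.add_private_pip_wheel(workspace=ws,
+                                             file_path='my_package-0.1-py3-none-any.whl',
+                                             exist_ok=True)
+
+ env = Environment('my-env')
+ env.python.conda_dependencies.add_pip_package(whl_url)
+ ```
+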
+ + You can now update the TLS/SSL certificate for the scoring endpoint deployed on an AKS cluster, for both Microsoft-generated and customer certificates.
+ + **azureml-explain-model**
+ + Added parameter to add a model ID to explanations on upload.
+ + Added `is_raw` tagging to explanations in memory and upload.
+ + Added pytorch support and tests for azureml-explain-model package.
+ + **azureml-opendatasets**
+ + Support detecting and logging auto test environment.
+ + Added classes to get US population by county and zip.
+ + **azureml-pipeline-core**
+ + Added label property to input and output port definitions.
+ + **azureml-telemetry**
+ + Fixed an incorrect telemetry configuration.
+ + **azureml-train-automl**
+ + Fixed the bug where on setup failure, error was not getting logged in "errors" field for the setup run and hence was not stored in parent run "errors".
+ + Fixed an issue in AutoML where rows with missing labels were not removed properly.
+ + Allow AutoML users to drop training series that are not long enough when forecasting.
+ + Allow AutoML users to drop grains from the test set that do not exist in the training set when forecasting.
+ + Now AutoMLStep passes through `automl` config to backend to avoid any issues on changes or additions of new config parameters.
+ + AutoML Data Guardrail is now in public preview. User will see a Data Guardrail report (for classification/regression tasks) after training and also be able to access it through SDK API.
+ + **azureml-train-core**
+ + Added torch 1.2 support in PyTorch Estimator.
+ + **azureml-widgets**
+ + Improved confusion matrix charts for classification training.
+
+### Azure Machine Learning Data Prep SDK v1.1.12
++ **New features**
+ + Lists of strings can now be passed in as input to `read_*` methods.
+++ **Bug fixes and improvements**
+ + The performance of `read_parquet` has been improved when running in Spark.
+ + Fixed an issue where `column_type_builder` failed in case of a single column with ambiguous date formats.
+
+### Azure portal
++ **Preview Feature**
+ + Log and output file streaming is now available for run details pages. The files will stream updates in real time when the preview toggle is turned on.
+ + Ability to set quota at a workspace level is released in preview. AmlCompute quotas are allocated at the subscription level, but we now allow you to distribute that quota between workspaces and allocate it for fair sharing and governance. Just click on the **Usages+Quotas** blade in the left navigation bar of your workspace and select the **Configure Quotas** tab. You must be a subscription admin to be able to set quotas at the workspace level since this is a cross-workspace operation.
+
+## 2019-08-05
+
+### Azure Machine Learning SDK for Python v1.0.55
+++ **New features**
+ + Token-based authentication is now supported for the calls made to the scoring endpoint deployed on AKS. We will continue to support the current key based authentication and users can use one of these authentication mechanisms at a time.
+ + Ability to register a blob storage that is behind the virtual network (VNet) as a datastore.
+++ **Bug fixes and improvements**
+ + **azureml-automl-core**
+ + Fixes a bug where validation size for CV splits is small and results in bad predicted vs. true charts for regression and forecasting.
+ + Improved the logging of forecasting tasks on remote runs; the user is now provided with a comprehensive error message if the run fails.
+ + Fixed failures of `Timeseries` if preprocess flag is True.
+ + Made some forecasting data validation error messages more actionable.
+ + Reduced memory consumption of AutoML runs by dropping and/or lazy loading of datasets, especially in between process spawns
+ + **azureml-contrib-explain-model**
+ + Added model_task flag to explainers to allow user to override default automatic inference logic for model type
+ + Widget changes: automatically installs with `contrib`, so there is no more `nbextension` install/enable; supports explanation with global feature importance (for example, Permutative)
+ + Dashboard changes: box plots and violin plots in addition to the `beeswarm` plot on the summary page; much faster rerendering of the `beeswarm` plot on 'Top -k' slider change; a helpful message explaining how top-k is computed; useful customizable messages in place of charts when data is not provided
+ + **azureml-core**
+ + Added Model.package() method to create Docker images and Dockerfiles that encapsulate models and their dependencies.
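+
+ A minimal sketch of packaging a registered model (the model and inference configuration are assumptions):
+
+ ```python
+ from azureml.core import Model
+
+ package = Model.package(ws, [model], inference_config)
+ package.wait_for_creation(show_output=True)
+ # Pull the built image locally for inspection or debugging.
+ package.pull()
+ ```
+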
+ + Updated local webservices to accept InferenceConfigs containing Environment objects.
+ + Fixed Model.register() producing invalid models when '.' (for the current directory) is passed as the model_path parameter.
+ + Added Run.submit_child; the functionality mirrors Experiment.submit while specifying the run as the parent of the submitted child run.
+ + Support configuration options from Model.register in Run.register_model.
+ + Ability to run JAR jobs on existing cluster.
+ + Now supporting instance_pool_id and cluster_log_dbfs_path parameters.
+ + Added support for using an Environment object when deploying a Model to a Webservice. The Environment object can now be provided as a part of the InferenceConfig object.
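+
+ For example, a hedged sketch of deploying with an Environment-based InferenceConfig (the environment, model, and service name are placeholders):
+
+ ```python
+ from azureml.core.model import InferenceConfig, Model
+ from azureml.core.webservice import AciWebservice
+
+ inference_config = InferenceConfig(entry_script='score.py', environment=my_environment)
+ deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
+ service = Model.deploy(ws, 'my-service', [model], inference_config, deployment_config)
+ service.wait_for_deployment(show_output=True)
+ ```
+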
+ + Added Application Insights mapping for new regions: centralus, westus, and northcentralus
+ + Added documentation for all the attributes in all the Datastore classes.
+ + Added blob_cache_timeout parameter to `Datastore.register_azure_blob_container`.
+ + Added save_to_directory and load_from_directory methods to azureml.core.environment.Environment.
+ + Added the "az ml environment download" and "az ml environment register" commands to the CLI.
+ + Added Environment.add_private_pip_wheel method.
+ + **azureml-explain-model**
+ + Added dataset tracking to Explanations using the Dataset service (preview).
+ + Decreased default batch size when streaming global explanations from 10k to 100.
+ + Added model_task flag to explainers to allow user to override default automatic inference logic for model type.
+ + **azureml-mlflow**
+ + Fixed bug in mlflow.azureml.build_image where nested directories are ignored.
+ + **azureml-pipeline-steps**
+ + Added ability to run JAR jobs on existing Azure Databricks cluster.
+ + Added support instance_pool_id and cluster_log_dbfs_path parameters for DatabricksStep step.
+ + Added support for pipeline parameters in DatabricksStep step.
+ + **azureml-train-automl**
+ + Added `docstrings` for the Ensemble related files.
+ + Updated docs to more appropriate language for `max_cores_per_iteration` and `max_concurrent_iterations`
+ + Improved the logging of forecasting tasks on remote runs; the user is now provided with a comprehensive error message if the run fails.
+ + Removed get_data from pipeline `automlstep` notebook.
+ + Started supporting `dataprep` in `automlstep`.
+
+### Azure Machine Learning Data Prep SDK v1.1.10
+++ **New features**
+ + You can now request to execute specific inspectors (for example, histogram, scatter plot, etc.) on specific columns.
+ + Added a parallelize argument to `append_columns`. If True, data will be loaded into memory but execution will run in parallel; if False, execution will be streaming but single-threaded.
+
+## 2019-07-23
+
+### Azure Machine Learning SDK for Python v1.0.53
+++ **New features**
+ + Automated Machine Learning now supports training ONNX models on the remote compute target
+ + Azure Machine Learning now provides ability to resume training from a previous run, checkpoint, or model files.
+ + Learn how to [use estimators to resume training from a previous run](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/tensorflow/train-tensorflow-resume-training/train-tensorflow-resume-training.ipynb)
+++ **Bug fixes and improvements**
+ + **azure-cli-ml**
+ + CLI commands "model deploy" and "service update" now accept parameters, config files, or a combination of the two. Parameters have precedence over attributes in files.
+ + Model description can now be updated after registration
+ + **azureml-automl-core**
+ + Update NimbusML dependency to 1.2.0 version (current latest).
+ + Adding support for NimbusML estimators & pipelines to be used within AutoML estimators.
+ + Fixing a bug in the Ensemble selection procedure that was unnecessarily growing the resulting ensemble even if the scores remained constant.
+ + Enable reuse of some featurizations across CV Splits for forecasting tasks. This speeds up the run-time of the setup run by roughly a factor of n_cross_validations for expensive featurizations like lags and rolling windows.
+ + Addressing an issue if time is out of pandas supported time range. We now raise a DataException if time is less than pd.Timestamp.min or greater than pd.Timestamp.max
+ + Forecasting now allows different frequencies in train and test sets if they can be aligned. For example, "quarterly starting in January" and "quarterly starting in October" can be aligned.
+ + The property "parameters" was added to the TimeSeriesTransformer.
+ + Remove old exception classes.
+ + In forecasting tasks, the `target_lags` parameter now accepts a single integer value or a list of integers. If an integer is provided, only one lag will be created. If a list is provided, the unique values of the lags will be taken. For example, target_lags=[1, 2, 2, 4] will create lags of one, two, and four periods.
+ + Fixed the bug where column types were lost after the transformation (bug linked);
+ + In `model.forecast(X, y_query)`, allow y_query to be an object type containing None(s) at the beginning (#459519).
+ + Add expected values to `automl` output
+ + **azureml-contrib-datadrift**
+ + Improvements to example notebook including switch to azureml-opendatasets instead of azureml-contrib-opendatasets and performance improvements when enriching data
+ + **azureml-contrib-explain-model**
+ + Fixed transformations argument for LIME explainer for raw feature importance in azureml-contrib-explain-model package
+ + Added segmentations to image explanations in image explainer for the AzureML-contrib-explain-model package
+ + Add scipy sparse support for LimeExplainer
+ + Added `batch_size` to mimic explainer when `include_local=False`, for streaming global explanations in batches to improve execution time of DecisionTreeExplainableModel
+ + **azureml-contrib-featureengineering**
+ + Fix for calling set_featurizer_timeseries_params(): dict value type change and null check. Added a notebook for the `timeseries` featurizer
+ + Update NimbusML dependency to 1.2.0 version (current latest).
+ + **azureml-core**
+ + Added the ability to attach DBFS datastores in the AzureML CLI
+ + Fixed the bug with datastore upload where an empty folder is created if `target_path` started with `/`
+ + Fixed `deepcopy` issue in ServicePrincipalAuthentication.
+ + Added the "az ml environment show" and "az ml environment list" commands to the CLI.
+ + Environments now support specifying a base_dockerfile as an alternative to an already-built base_image.
+ + The unused RunConfiguration setting auto_prepare_environment has been marked as deprecated.
+ + Model description can now be updated after registration
+ + Bugfix: Model and Image delete now provides more information about retrieving upstream objects that depend on them if delete fails due to an upstream dependency.
+ + Fixed bug that printed blank duration for deployments that occur when creating a workspace for some environments.
+ + Improved failure exceptions for workspace creation. Such that users don't see "Unable to create workspace. Unable to find..." as the message and instead see the actual creation failure.
+ + Add support for token authentication in AKS webservices.
+ + Add `get_token()` method to `Webservice` objects.
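+
+ A hedged sketch of enabling token authentication and retrieving a token (the service name is a placeholder; check the SDK reference for the exact return shape of `get_token()`):
+
+ ```python
+ from azureml.core.webservice import AksWebservice, Webservice
+
+ aks_config = AksWebservice.deploy_configuration(token_auth_enabled=True)
+ service = Webservice(ws, 'my-aks-service')
+ token_details = service.get_token()  # use the returned token in the Authorization header when scoring
+ ```
+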
+ + Added CLI support to manage machine learning datasets.
+ + `Datastore.register_azure_blob_container` now optionally takes a `blob_cache_timeout` value (in seconds) which configures blobfuse's mount parameters to enable cache expiration for this datastore. The default is no timeout; that is, when a blob is read, it will stay in the local cache until the job is finished. Most jobs will prefer this setting, but some jobs need to read more data from a large dataset than will fit on their nodes. For these jobs, tuning this parameter will help them succeed. Take care when tuning this parameter: setting the value too low can result in poor performance, as the data used in an epoch may expire before being used again. All reads will be done from blob storage/network rather than the local cache, which negatively impacts training times.
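+
+ A minimal sketch of registering a blob datastore with a cache timeout (account details are placeholders):
+
+ ```python
+ from azureml.core import Datastore
+
+ datastore = Datastore.register_azure_blob_container(workspace=ws,
+                                                     datastore_name='training_blob',
+                                                     container_name='training-data',
+                                                     account_name='mystorageaccount',
+                                                     account_key='<account-key>',
+                                                     blob_cache_timeout=1800)  # seconds
+ ```
+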
+ + Model description can now properly be updated after registration
+ + Model and Image deletion now provides more information about upstream objects that depend on them when such a dependency causes the delete to fail
+ + Improve resource utilization of remote runs using azureml.mlflow.
+ + **azureml-explain-model**
+ + Fixed transformations argument for LIME explainer for raw feature importance in azureml-contrib-explain-model package
+ + Added scipy sparse support for LimeExplainer
+ + Added a shap linear explainer wrapper, as well as another level to the tabular explainer for explaining linear models
+ + For the mimic explainer in the explain model library, fixed an error when include_local=False for sparse data input
+ + Added expected values to `automl` output
+ + Fixed permutation feature importance when the transformations argument is supplied to get raw feature importance
+ + Added `batch_size` to the mimic explainer when `include_local=False`, for streaming global explanations in batches to improve execution time of DecisionTreeExplainableModel
+ + For the model explainability library, fixed blackbox explainers where a pandas dataframe input is required for prediction
+ + Fixed a bug where `explanation.expected_values` would sometimes return a float rather than a list with a float in it.
+ + **azureml-mlflow**
+ + Improve performance of mlflow.set_experiment(experiment_name)
+ + Fix bug in use of InteractiveLoginAuthentication for mlflow tracking_uri
+ + Improve resource utilization of remote runs using azureml.mlflow.
+ + Improve the documentation of the azureml-mlflow package
+ + Patch bug where mlflow.log_artifacts("my_dir") would save artifacts under `my_dir/<artifact-paths>` instead of `<artifact-paths>`
+ + **azureml-opendatasets**
+ + Pin `pyarrow` of `opendatasets` to old versions (<0.14.0) because of memory issue newly introduced there.
+ + Move azureml-contrib-opendatasets to azureml-opendatasets.
+ + Allow open dataset classes to be registered to Azure Machine Learning workspace and leverage AML Dataset capabilities seamlessly.
+ + Improve NoaaIsdWeather enrich performance in non-SPARK version significantly.
+ + **azureml-pipeline-steps**
+ + DBFS Datastore is now supported for Inputs and Outputs in DatabricksStep.
+ + Updated documentation for Azure Batch Step with regard to inputs/outputs.
+ + In AzureBatchStep, changed *delete_batch_job_after_finish* default value to *true*.
+ + **azureml-telemetry**
+ + Move azureml-contrib-opendatasets to azureml-opendatasets.
+ + Allow open dataset classes to be registered to Azure Machine Learning workspace and leverage AML Dataset capabilities seamlessly.
+ + Improve NoaaIsdWeather enrich performance in non-SPARK version significantly.
+ + **azureml-train-automl**
+ + Updated documentation on get_output to reflect the actual return type and provide additional notes on retrieving key properties.
+ + Update NimbusML dependency to 1.2.0 version (current latest).
+ + add expected values to `automl` output
+ + **azureml-train-core**
+ + Strings are now accepted as compute target for Automated Hyperparameter Tuning
+ + The unused RunConfiguration setting auto_prepare_environment has been marked as deprecated.
+
+### Azure Machine Learning Data Prep SDK v1.1.9
+++ **New features**
+ + Added support for reading a file directly from an http or https url.
+++ **Bug fixes and improvements**
+ + Improved error message when attempting to read a Parquet Dataset from a remote source (which is not currently supported).
+ + Fixed a bug when writing to Parquet file format in ADLS Gen 2, and updating the ADLS Gen 2 container name in the path.
+
+## 2019-07-09
+
+### Visual Interface
++ **Preview features**
+ + Added "Execute R script" module in visual interface.
+
+### Azure Machine Learning SDK for Python v1.0.48
+++ **New features**
+ + **azureml-opendatasets**
+ + **azureml-contrib-opendatasets** is now available as **azureml-opendatasets**. The old package can still work, but we recommend using **azureml-opendatasets** moving forward for richer capabilities and improvements.
+ + This new package allows you to register open datasets as Dataset in Azure Machine Learning workspace, and leverage whatever functionalities that Dataset offers.
+ + It also includes existing capabilities such as consuming open datasets as Pandas/SPARK dataframes, and location joins for some datasets like weather.
+++ **Preview features**
+ + HyperDriveConfig can now accept pipeline object as a parameter to support hyperparameter tuning using a pipeline.
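+
+ A hedged sketch of tuning a pipeline with HyperDrive (the pipeline, sampling space, and metric are assumptions; the keyword name is illustrative):
+
+ ```python
+ from azureml.train.hyperdrive import HyperDriveConfig, RandomParameterSampling, PrimaryMetricGoal, uniform
+
+ sampling = RandomParameterSampling({'learning_rate': uniform(0.01, 0.1)})
+ hyperdrive_config = HyperDriveConfig(pipeline=my_pipeline,
+                                      hyperparameter_sampling=sampling,
+                                      primary_metric_name='accuracy',
+                                      primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
+                                      max_total_runs=8)
+ ```
+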
+++ **Bug fixes and improvements**
+ + **azureml-train-automl**
+ + Fixed the bug about losing columns types after the transformation.
+ + Fixed the bug to allow y_query to be an object type containing None(s) at the beginning.
+ + Fixed the issue in the Ensemble selection procedure that was unnecessarily growing the resulting ensemble even if the scores remained constant.
+ + Fixed the issue with the allowlist_models and blocklist_models settings in AutoMLStep.
+ + Fixed the issue that prevented the usage of preprocessing when AutoML would have been used in the context of Azure ML Pipelines.
+ + **azureml-opendatasets**
+ + Moved azureml-contrib-opendatasets to azureml-opendatasets.
+ + Allowed open dataset classes to be registered to Azure Machine Learning workspace and leverage AML Dataset capabilities seamlessly.
+ + Improved NoaaIsdWeather enrich performance in non-SPARK version significantly.
+ + **azureml-explain-model**
+ + Updated online documentation for interpretability objects.
+ + Added `batch_size` to mimic explainer when `include_local=False`, for streaming global explanations in batches to improve execution time of DecisionTreeExplainableModel for model explainability library.
+ + Fixed the issue where `explanation.expected_values` would sometimes return a float rather than a list with a float in it.
+ + Added expected values to `automl` output for mimic explainer in explain model library.
+ + Fixed permutation feature importance when transformations argument supplied to get raw feature importance.
+ + **azureml-core**
+ + Added the ability to attach DBFS datastores in the AzureML CLI.
+ + Fixed the issue with datastore upload where an empty folder is created if `target_path` started with `/`.
+ + Enabled comparison of two datasets.
+ + Model and Image delete now provides more information about retrieving upstream objects that depend on them if delete fails due to an upstream dependency.
+ + Deprecated the unused RunConfiguration setting auto_prepare_environment.
+ + **azureml-mlflow**
+ + Improved resource utilization of remote runs that use azureml.mlflow.
+ + Improved the documentation of the azureml-mlflow package.
+ + Fixed the issue where mlflow.log_artifacts("my_dir") would save artifacts under "my_dir/artifact-paths" instead of "artifact-paths".
+ + **azureml-pipeline-core**
+ + The parameter hash_paths for all pipeline steps is deprecated and will be removed in the future. By default, the contents of the source_directory are hashed (except files listed in `.amlignore` or `.gitignore`)
+ + Continued improving Module and ModuleStep to support compute type-specific modules, to prepare for RunConfiguration integration and other changes to unlock compute type-specific module usage in pipelines.
+ + **azureml-pipeline-steps**
+ + AzureBatchStep: Improved documentation with regard to inputs/outputs.
+ + AzureBatchStep: Changed delete_batch_job_after_finish default value to true.
+ + **azureml-train-core**
+ + Strings are now accepted as compute target for Automated Hyperparameter Tuning.
+ + Deprecated the unused RunConfiguration setting auto_prepare_environment.
+ + Deprecated parameters `conda_dependencies_file_path` and `pip_requirements_file_path` in favor of `conda_dependencies_file` and `pip_requirements_file` respectively.
+ + **azureml-opendatasets**
+ + Improve NoaaIsdWeather enrich performance in non-SPARK version significantly.
+
+## 2019-04-26
+
+### Azure Machine Learning SDK for Python v1.0.33 released.
+++ Azure ML Hardware Accelerated Models on [FPGAs](how-to-deploy-fpga-web-service.md) is generally available.
+ + You can now [use the azureml-accel-models package](how-to-deploy-fpga-web-service.md) to:
+ + Train the weights of a supported deep neural network (ResNet 50, ResNet 152, DenseNet-121, VGG-16, and SSD-VGG)
+ + Use transfer learning with the supported DNN
+ + Register the model with Model Management Service and containerize the model
+ + Deploy the model to an Azure VM with an FPGA in an Azure Kubernetes Service (AKS) cluster
+ + Deploy the container to an [Azure Data Box Edge](../../databox-online/azure-stack-edge-overview.md) server device
+ + Score your data with the gRPC endpoint with this [sample](https://github.com/Azure-Samples/aml-hardware-accelerated-models)
+
+### Automated Machine Learning
+++ Feature sweeping to enable dynamically adding :::no-loc text="featurizers"::: for performance optimization. New :::no-loc text="featurizers":::: word embeddings, weight of evidence, target encodings, text target encoding, cluster distance
++ Smart CV to handle train/valid splits inside automated ML
++ A few memory optimization changes and runtime performance improvements
++ Performance improvement in model explanation
++ ONNX model conversion for local run
++ Added Subsampling support
++ Intelligent Stopping when no exit criteria are defined
++ Stacked ensembles
+++ Time Series Forecasting
+ + New predict forecast function
+ + You can now use rolling-origin cross validation on time series data
+ + New functionality added to configure time series lags
+ + New functionality added to support rolling window aggregate features
+ + New Holiday detection and featurizer when country code is defined in experiment settings
+++ Azure Databricks
+ + Enabled time series forecasting and model explainability/interpretability capability
+ + You can now cancel and resume (continue) automated ML experiments
+ + Added support for multicore processing
+
+### MLOps
++ **Local deployment & debugging for scoring containers**<br/> You can now deploy an ML model locally and iterate quickly on your scoring file and dependencies to ensure they behave as expected.
++ **Introduced InferenceConfig & Model.deploy()**<br/> Model deployment now supports specifying a source folder with an entry script, the same as a RunConfig. Additionally, model deployment has been simplified to a single command (see the sketch after this list).
++ **Git reference tracking**<br/> Customers have been requesting basic Git integration capabilities for some time, because it helps maintain a complete audit trail. We have implemented tracking across major entities in Azure ML for Git-related metadata (repo, commit, clean state). This information is collected automatically by the SDK and CLI.
++ **Model profiling & validation service**<br/> Customers frequently report difficulty in properly sizing the compute associated with their inference service. With our model profiling service, you can provide sample inputs and we will profile across 16 different CPU / memory configurations to determine optimal sizing for deployment.
++ **Bring your own base image for inference**<br/> Another common complaint was the difficulty of moving from experimentation to inference while sharing dependencies. With our new base image sharing capability, you can now reuse your experimentation base images, dependencies and all, for inference. This should speed up deployments and reduce the gap from the inner to the outer loop.
++ **Improved Swagger schema generation experience**<br/> Our previous Swagger generation method was error prone and impossible to automate. We have a new inline way of generating Swagger schemas from any Python function via decorators. We have open-sourced this code, and our schema generation protocol isn't coupled to the Azure ML platform.
++ **Azure ML CLI is generally available (GA)**<br/> Models can now be deployed with a single CLI command. We got common customer feedback that no one deploys an ML model from a Jupyter notebook. The [**CLI reference documentation**](reference-azure-machine-learning-cli.md) has been updated.
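For illustration, here's a minimal sketch of the single-command deployment flow with `InferenceConfig` and `Model.deploy()`. The model name, entry script, conda file, and service name are placeholders, and the ACI sizing is only an example.

```python
from azureml.core import Workspace
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()

# Placeholder: a model previously registered in the workspace.
model = Model(ws, name="my-sklearn-model")

# Source folder plus entry script, the same pattern as a RunConfig.
inference_config = InferenceConfig(entry_script="score.py",
                                   source_directory="./scoring",
                                   runtime="python",
                                   conda_file="env.yml")
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

# One call builds the image and deploys the web service.
service = Model.deploy(ws, "my-scoring-service", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```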
+## 2019-04-22
+
+Azure Machine Learning SDK for Python v1.0.30 released.
+
+The [`PipelineEndpoint`](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline_endpoint.pipelineendpoint) was introduced to add a new version of a published pipeline while maintaining the same endpoint.
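A brief sketch of how `PipelineEndpoint` might be used to publish a pipeline behind a stable endpoint and later add a new version; the step, script, compute, and endpoint names below are placeholders.

```python
from azureml.core import Workspace
from azureml.pipeline.core import Pipeline, PipelineEndpoint
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()

# Placeholder step, script, and compute target.
step = PythonScriptStep(name="train", script_name="train.py",
                        source_directory="./train", compute_target="cpu-cluster")
pipeline = Pipeline(workspace=ws, steps=[step])

# Publish the first version behind a named endpoint.
endpoint = PipelineEndpoint.publish(workspace=ws, name="training-endpoint",
                                    pipeline=pipeline, description="v1")

# Later, publish an updated pipeline and make it the endpoint's default version.
endpoint = PipelineEndpoint.get(workspace=ws, name="training-endpoint")
endpoint.add_default(pipeline.publish(name="training-pipeline", description="v2"))
```

Callers that invoke the endpoint by name keep working while the default version changes underneath.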
+
+## 2019-04-15
+
+### Azure portal
+ + You can now resubmit an existing Script run on an existing remote compute cluster.
+ + You can now run a published pipeline with new parameters on the Pipelines tab.
+ + Run details now supports a new Snapshot file viewer. You can view a snapshot of the directory when you submitted a specific run. You can also download the notebook that was submitted to start the run.
+ + You can now cancel parent runs from the Azure portal.
+
+## 2019-04-08
+
+### Azure Machine Learning SDK for Python v1.0.23
+++ **New features**
+ + The Azure Machine Learning SDK now supports Python 3.7.
+ + Azure Machine Learning DNN Estimators now provide built-in multi-version support. For example,
+ `TensorFlow` estimator now accepts a `framework_version` parameter, and users can specify
+ version '1.10' or '1.12'. For a list of the versions supported by your current SDK release, call
+ `get_supported_versions()` on the desired framework class (for example, `TensorFlow.get_supported_versions()`).
+ For a list of the versions supported by the latest SDK release, see the [DNN Estimator documentation](/python/api/azureml-train-core/azureml.train.dnn).
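As a rough sketch of the multi-version support (the source directory, entry script, and compute target name below are placeholders):

```python
from azureml.train.dnn import TensorFlow

# Versions supported by the installed SDK release.
print(TensorFlow.get_supported_versions())

# Placeholder source directory, entry script, and compute target.
estimator = TensorFlow(source_directory="./train",
                       entry_script="train.py",
                       compute_target="gpu-cluster",
                       framework_version="1.12")
```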
+
+## 2019-03-25
+
+### Azure Machine Learning SDK for Python v1.0.21
+++ **New features**
+ + The *azureml.core.Run.create_children* method allows low-latency creation of multiple child-runs with a single call.
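A minimal sketch of the single-call pattern, assuming a placeholder experiment name and logged values:

```python
from azureml.core import Experiment, Workspace

ws = Workspace.from_config()
experiment = Experiment(ws, "batch-scoring")   # placeholder experiment name
parent_run = experiment.start_logging()

# One service call creates all five child runs at once.
children = parent_run.create_children(count=5)
for i, child in enumerate(children):
    child.log("partition", i)
    child.complete()

parent_run.complete()
```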
+
+## 2019-03-11
+
+### Azure Machine Learning SDK for Python v1.0.18
+
+ + **Changes**
+ + The azureml-tensorboard package replaces azureml-contrib-tensorboard.
+ + With this release, you can set up a user account on your managed compute cluster (amlcompute), while creating it. This can be done by passing these properties in the provisioning configuration. You can find more details in the [SDK reference documentation](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute#provisioning-configuration-vm-size--vm-priority--dedicatedmin-nodes-0--max-nodes-none--idle-seconds-before-scaledown-none--admin-username-none--admin-user-password-none--admin-user-ssh-key-none--vnet-resourcegroup-name-none--vnet-name-none--subnet-name-none--tags-none--description-none--remote-login-port-public-access--notspecified--).
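A hedged sketch of creating an amlcompute cluster with a user account, based on the parameters listed in the linked reference; the VM size, credentials, and cluster name are placeholders.

```python
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()

# Placeholder sizing and credentials; the user account is passed at creation time.
config = AmlCompute.provisioning_configuration(vm_size="STANDARD_D2_V2",
                                               max_nodes=4,
                                               admin_username="clusteradmin",
                                               admin_user_password="<strong-password>")
cluster = ComputeTarget.create(ws, "cpu-cluster", config)
cluster.wait_for_completion(show_output=True)
```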
+
+### Azure Machine Learning Data Prep SDK v1.0.17
+++ **New features**
+ + Now supports adding two numeric columns to generate a resultant column using the expression language.
+++ **Bug fixes and improvements**
+ + Improved the documentation and parameter checking for random_split.
+
+## 2019-02-27
+
+### Azure Machine Learning Data Prep SDK v1.0.16
+++ **Bug fix**
+ + Fixed a Service Principal authentication issue that was caused by an API change.
+
+## 2019-02-25
+
+### Azure Machine Learning SDK for Python v1.0.17
+++ **New features**
+ + Azure Machine Learning now provides first-class support for the popular DNN framework Chainer. Using the [`Chainer`](/python/api/azureml-train-core/azureml.train.dnn.chainer) class, users can easily train and deploy Chainer models.
+ + Learn how to [run distributed training with ChainerMN](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/chainer/distributed-chainer/distributed-chainer.ipynb)
+ + Learn how to [run hyperparameter tuning with Chainer using HyperDrive](https://github.com/Azure/MachineLearningNotebooks/blob/b881f78e4658b4e102a72b78dbd2129c24506980/how-to-use-azureml/ml-frameworks/chainer/deployment/train-hyperparameter-tune-deploy-with-chainer/train-hyperparameter-tune-deploy-with-chainer.ipynb)
+ + Azure Machine Learning Pipelines added ability to trigger a Pipeline run based on datastore modifications. The pipeline [schedule notebook](https://aka.ms/pl-schedule) is updated to showcase this feature.
+++ **Bug fixes and improvements**
+ + We have added support in Azure Machine Learning pipelines for setting the source_directory_data_store property to a desired datastore (such as a blob storage) on [RunConfigurations](/python/api/azureml-core/azureml.core.runconfig.runconfiguration) that are supplied to the [PythonScriptStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.python_script_step.pythonscriptstep). By default Steps use Azure File store as the backing datastore, which may run into throttling issues when a large number of steps are executed concurrently.
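A minimal sketch of pointing a step's backing datastore at blob storage; the datastore, script, and compute names below are placeholders.

```python
from azureml.core import Datastore, Workspace
from azureml.core.runconfig import RunConfiguration
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()
blob_store = Datastore.get(ws, "workspaceblobstore")   # existing blob datastore (placeholder name)

run_config = RunConfiguration()
# Back the step snapshot with the blob datastore instead of the default Azure File store.
run_config.source_directory_data_store = blob_store.name

step = PythonScriptStep(name="prep",
                        script_name="prep.py",          # placeholder script
                        source_directory="./prep",
                        compute_target="cpu-cluster",   # placeholder compute target
                        runconfig=run_config)
```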
+
+### Azure portal
+++ **New features**
+ + New drag and drop table editor experience for reports. Users can drag a column from the well to the table area where a preview of the table will be displayed. The columns can be rearranged.
+ + New Logs file viewer
+ + Links to experiment runs, compute, models, images, and deployments from the activities tab
+
+## Next steps
+
+Read the overview for [Azure Machine Learning](../overview-what-is-azure-machine-learning.md).
marketplace Azure Vm Plan Pricing And Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-plan-pricing-and-availability.md
These steps assume you have already selected either the _Flat rate_, _Per core_,
1. To offer a 3-year discount, select the **3-year saving %** check box and then enter the percentage discount you want to offer. 1. To see the discounted prices, select **Price per core size**. A table with the 1-year and 3-year prices for each core size is shown. These prices are calculated based on the number of hours in the term with the percentage discount subtracted.
- > [!TIP]
- > For Per core size plans, you can optionally change the price for a particular core size in the **Price/hour** column of the table.
-
+1. > [!TIP]
+ > For Per core size plans, you can optionally change the price for a particular core size in the **Price/hour** column of the table.
+
1. Make sure to select **Save draft** before you leave the page. The changes are applied once you publish the offer. ## Free trial
You can design each plan to be visible to everyone or only to a preselected priv
> [!NOTE] > A private audience is different from the preview audience that you defined on the **Preview audience** pane. A preview audience can access and view all private and public plans for validation purposes before it's published live to Azure Marketplace. A private audience can only access the specific plans that they are authorized to have access to once the offer is live.
+> [!IMPORTANT]
+> Private plans are still visible to everyone in the CLI, but only deployable to customers configured in the private audience.
+ Private offers aren't supported with Azure subscriptions established through a reseller of the Cloud Solution Provider program (CSP). ## Hide plan
migrate Common Questions Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-appliance.md
ms. Previously updated : 10/05/2022 Last updated : 11/22/2022 # Azure Migrate appliance: Common questions
The appliance can be deployed using a couple of methods:
## How does the appliance connect to Azure?
-The appliance can connect via the internet or by using Azure ExpressRoute.
+The appliance can connect to Azure using public or private networks or using Azure ExpressRoute.
- Make sure the appliance can connect to these [Azure URLs](./migrate-appliance.md#url-access). - You can use ExpressRoute with Microsoft peering. Public peering is deprecated, and isn't available for new ExpressRoute circuits.-- Private peering only isn't supported. ## Does appliance analysis affect performance?
network-watcher Azure Monitor Agent With Connection Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/azure-monitor-agent-with-connection-monitor.md
description: This article describes how to monitor network connectivity in Conne
-+ Previously updated : 09/11/2022 Last updated : 10/27/2022 #Customer intent: I need to monitor a connection by using Azure Monitor Agent.
Connection Monitor relies on lightweight executable files to run connectivity ch
### Agents for Azure virtual machines and scale sets
-To install agents for Azure virtual machines and virtual machine scale sets, see the "Agents for Azure virtual machines and virtual machine scale sets" section of [Monitor network connectivity by using Connection Monitor](connection-monitor-overview.md#agents-for-azure-virtual-machines-and-virtual-machine-scale-sets).
+To install agents for Azure virtual machines and Virtual Machine Scale Sets, see the "Agents for Azure virtual machines and Virtual Machine Scale Sets" section of [Monitor network connectivity by using Connection Monitor](connection-monitor-overview.md#agents-for-azure-virtual-machines-and-virtual-machine-scale-sets).
### Agents for on-premises machines
network-watcher Connection Monitor Connected Machine Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-connected-machine-agent.md
description: This article describes how to install Azure Connected Machine agent
-+ Previously updated : 09/11/2022 Last updated : 10/27/2022 #Customer intent: I need to monitor a connection by using Azure Monitor Agent.
network-watcher Connection Monitor Create Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-create-using-portal.md
- Previously updated : 11/23/2020+ Last updated : 11/05/2022 #Customer intent: I need to create a connection monitor to monitor communication between one VM and another.
This article describes how to create a monitor in Connection Monitor by using th
> To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor ](migrate-to-connection-monitor-from-network-performance-monitor.md) or [migrate from Connection Monitor (classic)](migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new connection monitor in Azure Network Watcher before February 19, 2024. > [!IMPORTANT]
-> Connection Monitor supports end-to-end connectivity checks from and to Azure virtual machine scale sets. These checks enable faster performance monitoring and network troubleshooting across scale sets.
+> Connection Monitor supports end-to-end connectivity checks from and to Azure Virtual Machine Scale Sets. These checks enable faster performance monitoring and network troubleshooting across scale sets.
## Before you begin
-In monitors that you create by using Connection Monitor, you can add on-premises machines, Azure virtual machines (VMs), and Azure virtual machine scale sets as sources. These connection monitors can also monitor connectivity to endpoints. The endpoints can be on Azure or on any other URL or IP.
+In monitors that you create by using Connection Monitor, you can add on-premises machines, Azure virtual machines (VMs), and Azure Virtual Machine Scale Sets as sources. These connection monitors can also monitor connectivity to endpoints. The endpoints can be on Azure or on any other URL or IP.
Here are some definitions to get you started:
In the Azure portal, to create a test group in a connection monitor, specify val
* **Test group Name**: Enter the name of your test group. * **Sources**: Select **Add sources** to specify both Azure VMs and on-premises machines as sources if agents are installed on them. To learn about installing an agent for your source, see [Install monitoring agents](./connection-monitor-overview.md#install-monitoring-agents).
- * To choose Azure agents, select the **Azure endpoints** tab. Here you see only VMs or virtual machine scale sets that are bound to the region that you specified when you created the connection monitor. By default, VMs and virtual machine scale sets are grouped into the subscription that they belong to. These groups are collapsed.
+ * To choose Azure agents, select the **Azure endpoints** tab. Here you see only VMs or Virtual Machine Scale Sets that are bound to the region that you specified when you created the connection monitor. By default, VMs and Virtual Machine Scale Sets are grouped into the subscription that they belong to. These groups are collapsed.
You can drill down to further levels in the hierarchy from the **Subscription** level:
In the Azure portal, to create a test group in a connection monitor, specify val
When you select a virtual network, subnet, a single VM, or a virtual machine scale set, the corresponding resource ID is set as the endpoint. By default, all VMs in the selected virtual network or subnet participate in monitoring. To reduce the scope, either select specific subnets or agents or change the value of the scope property.
- :::image type="content" source="./media/connection-monitor-2-preview/add-sources-1.png" alt-text="Screenshot that shows the 'Add Sources' pane and the Azure endpoints, including the 'Virtual machine scale sets' tab in Connection Monitor.":::
+ :::image type="content" source="./media/connection-monitor-2-preview/add-sources-1.png" alt-text="Screenshot that shows the 'Add Sources' pane and the Azure endpoints, including the 'Virtual Machine Scale Sets' tab in Connection Monitor.":::
* To choose on-premises agents, select the **NonΓÇôAzure endpoints** tab. Select from a list of on-premises hosts with a Log Analytics agent installed. Select **Arc Endpoint** as the **Type**, and select the subscriptions from the **Subscription** dropdown list. The list of hosts that have the [Azure Arc endpoint](azure-monitor-agent-with-connection-monitor.md) extension and the [Azure Monitor Agent extension](connection-monitor-install-azure-monitor-agent.md) enabled are displayed.
In the Azure portal, to create a test group in a connection monitor, specify val
:::image type="content" source="./media/connection-monitor-2-preview/add-test-config.png" alt-text="Screenshot that shows where to set up a test configuration in Connection Monitor."::: * **Test Groups**: You can add one or more test groups to a connection monitor. These test groups can consist of multiple Azure or non-Azure endpoints.
- * For selected Azure VMs or Azure virtual machine scale sets and non-Azure endpoints without monitoring extensions, the extension for Azure VMs and the npm solution for non-Azure endpoints will be auto enabled after the creation of the connection monitor begins.
+ * For selected Azure VMs or Azure Virtual Machine Scale Sets and non-Azure endpoints without monitoring extensions, the extension for Azure VMs and the npm solution for non-Azure endpoints will be auto enabled after the creation of the connection monitor begins.
* If the selected virtual machine scale set is set for a manual upgrade, you'll have to upgrade the scale set after Network Watcher extension installation to continue setting up the connection monitor with virtual machine scale set as endpoints. If the virtual machine scale set is set to auto upgrade, you don't need to worry about any upgrading after the Network Watcher extension is installed.
- * In the previously mentioned scenario, you can consent to an auto upgrade of a virtual machine scale set with auto enabling of the Network Watcher extension during the creation of the connection monitor for virtual machine scale sets with manual upgrading. This would eliminate your having to manually upgrade the virtual machine scale set after you install the Network Watcher extension.
+ * In the previously mentioned scenario, you can consent to an auto upgrade of a virtual machine scale set with auto enabling of the Network Watcher extension during the creation of the connection monitor for Virtual Machine Scale Sets with manual upgrading. This would eliminate your having to manually upgrade the virtual machine scale set after you install the Network Watcher extension.
:::image type="content" source="./media/connection-monitor-2-preview/consent-vmss-auto-upgrade.png" alt-text="Screenshot that shows where to set up test groups and consent for auto-upgrading of a virtual machine scale set in the connection monitor."::: * **Disable test group**: You can select this checkbox to disable monitoring for all sources and destinations that the test group specifies. This checkbox is cleared by default.
network-watcher Connection Monitor Create Using Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-create-using-template.md
na Previously updated : 01/07/2021 Last updated : 02/08/2021+ #Customer intent: I need to create a connection monitor to monitor communication between one VM and another. # Create a Connection Monitor using the ARM template > [!IMPORTANT]
-> Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You will also not be able to add new connection monitors in Connection Monitor (classic). You can continue to use the tests and connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor ](migrate-to-connection-monitor-from-network-performance-monitor.md) or [migrate from Connection Monitor (classic)](migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new Connection Monitor in Azure Network Watcher before 29 February 2024.
+> Starting 1 July 2021, you'll not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You'll also not be able to add new connection monitors in Connection Monitor (classic). You can continue to use the tests and connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor ](migrate-to-connection-monitor-from-network-performance-monitor.md) or [migrate from Connection Monitor (classic)](migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new Connection Monitor in Azure Network Watcher before 29 February 2024.
Learn how to create Connection Monitor to monitor communication between your resources using the ARMClient. It supports hybrid and Azure cloud deployments.
address: '<URL>'
port: '<port of choice>'
- preferHTTPS: true // If port chosen is not 80 or 443
+ preferHTTPS: true // If port chosen isn't 80 or 443
method: 'GET', //Choose GET or POST
armclient PUT $ARM/$SUB/$NW/connectionMonitors/$connectionMonitorName/?api-versi
* Endpoints * name ΓÇô Unique name for each endpoint
- * resourceId ΓÇô For Azure endpoints, resource ID refers to the Azure Resource Manager resource ID for virtual machines.For non-Azure endpoints, resource ID refers to the Azure Resource Manager resource ID for the Log Analytics workspace linked to non-Azure agents.
- * address ΓÇô Applicable only when either resource ID is not specified or if resource ID is Log Analytics workspace. If used with Log Analytics resource ID, this refers to the FQDN of the agent that can be used for monitoring. If used without resource ID, this can be the URL or IP of any public endpoint.
- * filter ΓÇô For non-Azure endpoints, use filter to select agents from Log Analytics workspace that will be used for monitoring in Connection monitor resource. If filters are not set, all agents belonging to the Log Analytics workspace can be used for monitoring
+ * resourceId ΓÇô For Azure endpoints, resource ID refers to the Azure Resource Manager resource ID for virtual machines. For non-Azure endpoints, resource ID refers to the Azure Resource Manager resource ID for the Log Analytics workspace linked to non-Azure agents.
+ * address ΓÇô Applicable only when either resource ID isn't specified or if resource ID is Log Analytics workspace. If used with Log Analytics resource ID, this refers to the FQDN of the agent that can be used for monitoring. If used without resource ID, this can be the URL or IP of any public endpoint.
+ * filter ΓÇô For non-Azure endpoints, use filter to select agents from Log Analytics workspace that will be used for monitoring in Connection monitor resource. If filters aren't set, all agents belonging to the Log Analytics workspace can be used for monitoring
* type ΓÇô Set type as ΓÇ£Agent AddressΓÇ¥ * address ΓÇô Set address as the FQDN of your on-premises agent
armclient PUT $ARM/$SUB/$NW/connectionMonitors/$connectionMonitorName/?api-versi
* preferHTTPS - Specify whether to use HTTPS over HTTP, when port used is neither 80 nor 443 * port - Specify the destination port of your choice.
- * disableTraceRoute - This applies to test configurations whose protocol is TCP or ICMP. It stop sources from discovering topology and hop-by-hop RTT.
+ * disableTraceRoute - This applies to test configurations whose protocol is TCP or ICMP. It stops sources from discovering topology and hop-by-hop RTT.
* method - This applied to test configurations whose protocol is HTTP. Select the HTTP request method--either GET or POST * path - Specify path parameters to append to URL
- * validStatusCodes - Choose applicable status codes. If response code does not match this list, you will get a diagnostic message
+ * validStatusCodes - Choose applicable status codes. If response code doesn't match this list, you'll get a diagnostic message
* requestHeaders - Specify custom request header strings that will make be passed to the destination * successThreshold - You can set thresholds on the following network parameters:
network-watcher Connection Monitor Install Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-install-azure-monitor-agent.md
Previously updated : 09/11/2022 Last updated : 10/25/2022 #Customer intent: I need to monitor a connection by using Azure Monitor Agent.
network-watcher Connection Monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-overview.md
editor: '' tags: azure-resource-manager- na Previously updated : 01/04/2021 Last updated : 10/04/2022 -+ #Customer intent: I need to monitor communication between one VM and another. If the communication fails, I need to know why so that I can resolve the problem. # Monitor network connectivity by using Connection Monitor
network-watcher Connection Monitor Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-schema.md
documentationcenter: na - na Previously updated : 07/05/2021 Last updated : 08/14/2021 + # Azure Network Watcher Connection Monitor schemas
network-watcher Connection Monitor Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-virtual-machine-scale-set.md
Title: Tutorial - Monitor network communication between two virtual machine scale sets by using the Azure portal
-description: In this tutorial, you'll learn how to monitor network communication between two virtual machine scale sets by using the Azure Network Watcher connection monitor capability.
+ Title: Tutorial - Monitor network communication between two Virtual Machine Scale Sets by using the Azure portal
+description: In this tutorial, you'll learn how to monitor network communication between two Virtual Machine Scale Sets by using the Azure Network Watcher connection monitor capability.
documentationcenter: na editor: '' tags: azure-resource-manager # Customer intent: I need to monitor communication between a virtual machine scale set and another VM. If the communication fails, I need to know why, so that I can resolve the problem. - na Previously updated : 05/24/2022 Last updated : 10/17/2022 -+
-# Tutorial: Monitor network communication between two virtual machine scale sets by using the Azure portal
+# Tutorial: Monitor network communication between two Virtual Machine Scale Sets by using the Azure portal
Successful communication between a virtual machine scale set and an endpoint, such as another virtual machine (VM), can be critical for your organization. Sometimes, the introduction of configuration changes can break communication. In this tutorial, you learn how to:
First, create a public standard load balancer by using the Azure portal. The nam
You can deploy a scale set with a Windows Server image or Linux images such as RHEL, CentOS, Ubuntu, or SLES.
-1. Type **Scale set** in the search box. In the results, under **Marketplace**, select **Virtual machine scale sets**.
-1. On the **Virtual machine scale sets** pane, select **Create**.
+1. Type **Scale set** in the search box. In the results, under **Marketplace**, select **Virtual Machine Scale Sets**.
+1. On the **Virtual Machine Scale Sets** pane, select **Create**.
The **Create a virtual machine scale set** page opens. 1. On the **Basics** pane, under **Project details**, ensure that the correct subscription is selected, and then select **myVMSSResourceGroup** in the resource group list.
In the Azure portal, to create a test group in a connection monitor, do the foll
1. **Name**: Name your test group. 1. **Sources**: You can specify both Azure VMs and on-premises machines as sources if agents are installed on them. To learn about installing an agent for your source, see [Install monitoring agents](./connection-monitor-overview.md#install-monitoring-agents).
- * To choose Azure agents, select the **Azure endpoints** tab. Here you see only VMs or virtual machine scale sets that are bound to the region that you specified when you created the connection monitor. By default, VMs and virtual machine scale sets are grouped into the subscription that they belong to. These groups are collapsed.
+ * To choose Azure agents, select the **Azure endpoints** tab. Here you see only VMs or Virtual Machine Scale Sets that are bound to the region that you specified when you created the connection monitor. By default, VMs and Virtual Machine Scale Sets are grouped into the subscription that they belong to. These groups are collapsed.
You can drill down from the **Subscription** level to other levels in the hierarchy:
In the Azure portal, to create a test group in a connection monitor, do the foll
1. **Test Groups**: You can add one or more Test Groups to a Connection Monitor. These test groups can consist of multiple Azure or non-Azure endpoints.
- For selected Azure VMs or Azure virtual machine scale sets and non-Azure endpoints without monitoring extensions, the extension for Azure VMs and the Network Performance Monitor solution for non-Azure endpoints will be auto-enabled after the creation of Connection Monitor begins.
+ For selected Azure VMs or Azure Virtual Machine Scale Sets and non-Azure endpoints without monitoring extensions, the extension for Azure VMs and the Network Performance Monitor solution for non-Azure endpoints will be auto-enabled after the creation of Connection Monitor begins.
- If the selected virtual machine scale set is set for manual upgrade, you'll have to upgrade the scale set after the Network Watcher extension installation. Doing so lets you continue setting up the Connection Monitor with virtual machine scale sets as endpoints. If the virtual machine scale set is set to auto-upgrade, you don't need to worry about upgrading after the installation of the Network Watcher extension.
+ If the selected Virtual Machine Scale Set is set for manual upgrade, you'll have to upgrade the scale set after the Network Watcher extension installation. Doing so lets you continue setting up the Connection Monitor with Virtual Machine Scale Sets as endpoints. If the virtual machine scale set is set to auto-upgrade, you don't need to worry about upgrading after the installation of the Network Watcher extension.
- In the previously mentioned scenario, you can consent to an auto-upgrade of virtual machine scale sets with auto-enabling of the Network Watcher extension during the creation of Connection Monitor for virtual machine scale sets with manual upgrading. This approach eliminates the need to manually upgrade the virtual machine scale set after you install the Network Watcher extension.
+ - In the previously mentioned scenario, you can consent to an auto-upgrade of Virtual Machine Scale Sets with auto-enabling of the Network Watcher extension during the creation of Connection Monitor for Virtual Machine Scale Sets with manual upgrading. This approach eliminates the need to manually upgrade the virtual machine scale set after you install the Network Watcher extension.
:::image type="content" source="./media/connection-monitor-2-preview/consent-vmss-auto-upgrade.png" alt-text="Screenshot that shows where to set up a test group and consent for an auto-upgrade of the virtual machine scale set in Connection Monitor.":::
network-watcher Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/data-residency.md
na Previously updated : 01/07/2021 Last updated : 06/16/2021 +
network-watcher Diagnose Vm Network Routing Problem Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem-cli.md
Title: Diagnose a VM network routing problem - Azure CLI
-description: In this article, you learn how use Azure CLI to diagnose a virtual machine network routing problem using the next hop capability of Azure Network Watcher.
+description: In this article, you learn how to use Azure CLI to diagnose a virtual machine network routing problem using the next hop capability of Azure Network Watcher.
documentationcenter: network-watcher
network-watcher Previously updated : 01/07/2021 Last updated : 03/18/2022 -+ # Diagnose a virtual machine network routing problem - Azure CLI
network-watcher Diagnose Vm Network Traffic Filtering Problem Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem-cli.md
Title: 'Quickstart: Diagnose a VM network traffic filter problem - Azure CLI'
+ Title: Quickstart to diagnose a VM network traffic filter problem - Azure CLI
description: Learn how to use Azure CLI to diagnose a virtual machine network traffic filter problem using the IP flow verify capability of Azure Network Watcher. documentationcenter: network-watcher tags: azure-resource-manager network-watcher Previously updated : 05/04/2022 Last updated : 11/02/2022 -+ #Customer intent: I need to diagnose a virtual machine (VM) network traffic filter problem that prevents communication to and from a VM. # Quickstart: Diagnose a virtual machine network traffic filter problem - Azure CLI
-In this quickstart, you deploy a virtual machine (VM), and then check communications to an IP address and URL and from an IP address. You determine the cause of a communication failure and how you can resolve it.
+In this quickstart, you deploy a virtual machine (VM) and then check communications to and from an IP address and to a URL. You determine the cause of a communication failure and how to resolve it.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)] -- This quickstart requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This quickstart requires version 2.0 or later of the Azure CLI. If you are using Azure Cloud Shell, the latest version is already installed.
- The Azure CLI commands in this quickstart are formatted to run in a Bash shell. ## Create a VM
-Before you can create a VM, you must create a resource group to contain the VM. Create a resource group with [az group create](/cli/azure/group). The following example creates a resource group named *myResourceGroup* in the *eastus* location:
+1. Before you can create a VM, you must create a resource group to contain the VM. Create a resource group with [az group create](/cli/azure/group). The following example creates a resource group named *myResourceGroup* in the *eastus* location:
```azurecli-interactive az group create --name myResourceGroup --location eastus ```
-Create a VM with [az vm create](/cli/azure/vm). If SSH keys do not already exist in a default key location, the command creates them. To use a specific set of keys, use the `--ssh-key-value` option. The following example creates a VM named *myVm*:
+2. Create a VM with [az vm create](/cli/azure/vm). If SSH keys don't already exist in a default key location, the command creates them. To use a specific set of keys, use the `--ssh-key-value` option. The following example creates a VM named *myVm*:
```azurecli-interactive az vm create \
az vm create \
--generate-ssh-keys ```
-The VM takes a few minutes to create. Don't continue with remaining steps until the VM is created and the Azure CLI returns output.
+The VM takes a few minutes to create. Don't continue with the remaining steps until the VM is created and the Azure CLI returns the output.
## Test network communication
az network watcher configure \
### Use IP flow verify
-When you create a VM, Azure allows and denies network traffic to and from the VM, by default. You might later override Azure's defaults, allowing or denying additional types of traffic. To test whether traffic is allowed or denied to different destinations and from a source IP address, use the [az network watcher test-ip-flow](/cli/azure/network/watcher#az-network-watcher-test-ip-flow) command.
+When you create a VM, Azure allows and denies network traffic to and from the VM, by default. You might override Azure's defaults later, allowing or denying additional types of traffic. To test whether traffic is allowed or denied to different destinations and from a source IP address, use the [az network watcher test-ip-flow](/cli/azure/network/watcher#az-network-watcher-test-ip-flow) command.
Test outbound communication from the VM to one of the IP addresses for www.bing.com:
az network nic list-effective-nsg \
--name myVmVMNic ```
-The returned output includes the following text for the **AllowInternetOutbound** rule that allowed outbound access to www.bing.com in a previous step under [Use IP flow verify](#use-ip-flow-verify):
+The output includes the following text for the **AllowInternetOutbound** rule that allowed outbound access to www.bing.com in a previous step under [Use IP flow verify](#use-ip-flow-verify):
```console {
The returned output includes the following text for the **AllowInternetOutbound*
You can see in the previous output that **destinationAddressPrefix** is **Internet**. It's not clear how 13.107.21.200 relates to **Internet** though. You see several address prefixes listed under **expandedDestinationAddressPrefix**. One of the prefixes in the list is **12.0.0.0/6**, which encompasses the 12.0.0.1-15.255.255.254 range of IP addresses. Since 13.107.21.200 is within that address range, the **AllowInternetOutBound** rule allows the outbound traffic. Additionally, there are no higher priority (lower number) rules shown in the previous output that override this rule. To deny outbound communication to an IP address, you could add a security rule with a higher priority that denies port 80 outbound to the IP address.
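As a side note (not part of the quickstart commands), the prefix arithmetic can be checked with Python's standard `ipaddress` module:

```python
import ipaddress

# 12.0.0.0/6 spans 12.0.0.0 - 15.255.255.255, so 13.107.21.200 falls inside it.
address = ipaddress.ip_address("13.107.21.200")
prefix = ipaddress.ip_network("12.0.0.0/6")
print(address in prefix)                                  # True
print(prefix.network_address, prefix.broadcast_address)   # 12.0.0.0 15.255.255.255
```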
-When you ran the `az network watcher test-ip-flow` command to test outbound communication to 172.131.0.100 in [Use IP flow verify](#use-ip-flow-verify), the output informed you that the **DenyAllOutBound** rule denied the communication. The **DenyAllOutBound** rule equates to the **DenyAllOutBound** rule listed in the following output from the `az network nic list-effective-nsg` command:
+When you ran the [az network watcher test-ip-flow](/cli/azure/network/watcher#az-network-watcher-test-ip-flow) command to test outbound communication to 172.131.0.100 in [Use IP flow verify](#use-ip-flow-verify), the output informed you that the **DenyAllOutBound** rule denied the communication. The **DenyAllOutBound** rule equates to the **DenyAllOutBound** rule listed in the following output from the [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg) command:
```console {
When you ran the `az network watcher test-ip-flow` command to test outbound comm
} ```
-The rule lists **0.0.0.0/0** as the **destinationAddressPrefix**. The rule denies the outbound communication to 172.131.0.100 because the address is not within the **destinationAddressPrefix** of any of the other outbound rules in the output from the `az network nic list-effective-nsg` command. To allow the outbound communication, you could add a security rule with a higher priority, that allows outbound traffic to port 80 at 172.131.0.100.
+The rule lists **0.0.0.0/0** as the **destinationAddressPrefix**. The rule denies the outbound communication to 172.131.0.100 because the address is not within the **destinationAddressPrefix** of any of the other outbound rules in the output from the [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg) command. To allow the outbound communication, you could add a security rule with a higher priority that allows outbound traffic to port 80 at 172.131.0.100.
-When you ran the `az network watcher test-ip-flow` command in [Use IP flow verify](#use-ip-flow-verify) to test inbound communication from 172.131.0.100, the output informed you that the **DenyAllInBound** rule denied the communication. The **DenyAllInBound** rule equates to the **DenyAllInBound** rule listed in the following output from the `az network nic list-effective-nsg` command:
+When you ran the [az network watcher test-ip-flow](/cli/azure/network/watcher#az-network-watcher-test-ip-flow) command in [Use IP flow verify](#use-ip-flow-verify) to test inbound communication from 172.131.0.100, the output informed you that the **DenyAllInBound** rule denied the communication. The **DenyAllInBound** rule equates to the **DenyAllInBound** rule listed in the following output from the [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg) command:
```console {
When you ran the `az network watcher test-ip-flow` command in [Use IP flow verif
}, ```
-The **DenyAllInBound** rule is applied because, as shown in the output, no other higher priority rule exists in the output from the `az network nic list-effective-nsg` command that allows port 80 inbound to the VM from 172.131.0.100. To allow the inbound communication, you could add a security rule with a higher priority that allows port 80 inbound from 172.131.0.100.
+The **DenyAllInBound** rule is applied because, as shown in the output, no other higher priority rule exists in the output from the [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg) command that allows port 80 inbound to the VM from 172.131.0.100. To allow the inbound communication, you could add a security rule with a higher priority that allows port 80 inbound from 172.131.0.100.
The checks in this quickstart tested Azure configuration. If the checks return the expected results and you still have network problems, ensure that you don't have a firewall between your VM and the endpoint you're communicating with and that the operating system in your VM doesn't have a firewall that is allowing or denying communication.
az group delete --name myResourceGroup --yes
## Next steps
-In this quickstart, you created a VM and diagnosed inbound and outbound network traffic filters. You learned that network security group rules allow or deny traffic to and from a VM. Learn more about [security rules](../virtual-network/network-security-groups-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) and how to [create security rules](../virtual-network/manage-network-security-group.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#create-a-security-rule).
-
-Even with the proper network traffic filters in place, communication to a VM can still fail, due to routing configuration. To learn how to diagnose VM network routing problems, see [Diagnose VM routing problems](diagnose-vm-network-routing-problem-cli.md) or, to diagnose outbound routing, latency, and traffic filtering problems, with one tool, see [Connection troubleshoot](network-watcher-connectivity-cli.md).
+- Learn more about [security rules](../virtual-network/network-security-groups-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) and how to [create security rules](../virtual-network/manage-network-security-group.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#create-a-security-rule).
+- Even with the proper network traffic filters in place, communication to a VM can still fail, due to routing configuration. To learn how to diagnose VM network routing problems, see [Diagnose VM routing problems](diagnose-vm-network-routing-problem-cli.md).
+- [Learn more](network-watcher-connectivity-cli.md) about Connection troubleshoot to diagnose outbound routing, latency, and traffic filtering problems.
network-watcher Enable Network Watcher Flow Log Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/enable-network-watcher-flow-log-settings.md
na Previously updated : 05/11/2022 Last updated : 05/30/2022 -+ # Enable Azure Network Watcher
network-watcher Migrate To Connection Monitor From Connection Monitor Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/migrate-to-connection-monitor-from-connection-monitor-classic.md
na Previously updated : 01/07/2021 Last updated : 06/30/2021 + #Customer intent: I need to migrate from Connection Monitor to Connection Monitor. # Migrate to Connection Monitor from Connection Monitor (Classic)
Below are some common errors faced during the migration :
| Error | Reason | |||
-|Following Connection monitors cannot be imported as one or more Subscription/Region combination don't have network watcher enabled. Enable network watcher and click refresh to import them. List of Connection monitor - {0} | This error occurs when User is migrating tests from CM(classic) to Connection Monitor and Network Watcher Extension is not enabled not enabled in one or more subscriptions and regions of CM (classic). User needs to enable NW Extension in the subscription and refresh to import them before migrating again |
+|Following Connection monitors cannot be imported as one or more Subscription/Region combination don't have network watcher enabled. Enable network watcher and click refresh to import them. List of Connection monitor - {0} | This error occurs when User is migrating tests from CM(classic) to Connection Monitor and Network Watcher Extension is not enabled in one or more subscriptions and regions of CM (classic). User needs to enable NW Extension in the subscription and refresh to import them before migrating again |
|Connection monitors having following tests cannot be imported as one or more azure virtual machines don't have network watcher extension installed. Install network watcher extension and click refresh to import them. List of tests - {0} | This error occurs when User is migrating tests from CM(classic) to Connection Monitor and Network Watcher Extension is not installed in one or more Azure VMs of CM (classic). User needs to install NW Extension in the Azure VM and refresh before migrating again | |No rows to display | This error occurs when User is trying to migrate subscriptions from CM (Classic) to CM but no CM (classic) is created in the subscriptions |
network-watcher Migrate To Connection Monitor From Network Performance Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/migrate-to-connection-monitor-from-network-performance-monitor.md
Previously updated : 01/07/2021 Last updated : 06/30/2021 + #Customer intent: I need to migrate from Network Performance Monitor to Connection Monitor. # Migrate to Connection Monitor from Network Performance Monitor > [!IMPORTANT]
-> Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace with Network Performance Monitor. You can continue to use the tests created prior to 1 July 2021. To minimize service disruption to your current workloads, migrate your tests from Network Performance Monitor to the new Connection Monitor in Azure Network Watcher before 29 February 2024.
+> Starting 1 July 2021, you'll not be able to add new tests in an existing workspace or enable a new workspace with Network Performance Monitor. You can continue to use the tests created prior to 1 July 2021. To minimize service disruption to your current workloads, migrate your tests from Network Performance Monitor to the new Connection Monitor in Azure Network Watcher before 29 February 2024.
-You can migrate tests from Network Performance Monitor (NPM) to new, improved Connection Monitor with a single click and with zero downtime. To learn more about the benefits, see [Connection Monitor](./connection-monitor-overview.md).
+You can migrate tests from Network Performance Monitor to new, improved Connection Monitor with a single click and with zero downtime. To learn more about the benefits, see [Connection Monitor](./connection-monitor-overview.md).
## Key points to note
The migration helps produce the following results:
* Existing tests are mapped to Connection Monitor > Test Group > Test format. By selecting **Edit**, you can view and modify the properties of the new Connection Monitor, download a template to make changes to it, and submit the template via Azure Resource Manager. * Agents send data to both the Log Analytics workspace and the metrics. * Data monitoring:
- * **Data in Log Analytics**: Before migration, the data remains in the workspace in which NPM is configured in the NetworkMonitoring table. After the migration, the data goes to the NetworkMonitoring table, NWConnectionMonitorTestResult table and NWConnectionMonitorPathResult table in the same workspace. After the tests are disabled in NPM, the data is stored only in the NWConnectionMonitorTestResult table and NWConnectionMonitorPathResult table.
+ * **Data in Log Analytics**: Before migration, the data remains in the workspace in which Network Performance Monitor is configured in the NetworkMonitoring table. After the migration, the data goes to the NetworkMonitoring table, NWConnectionMonitorTestResult table and NWConnectionMonitorPathResult table in the same workspace. After the tests are disabled in Network Performance Monitor, the data is stored only in the NWConnectionMonitorTestResult table and NWConnectionMonitorPathResult table.
* **Log-based alerts, dashboards, and integrations**: You must manually edit the queries based on the new NWConnectionMonitorTestResult table and NWConnectionMonitorPathResult table. To re-create the alerts in metrics, see [Network connectivity monitoring with Connection Monitor](./connection-monitor-overview.md#metrics-in-azure-monitor). * For ExpressRoute Monitoring:
- * **End to end loss and latency**: Connection Monitor will power this, and it will easier than NPM as users do not need to configure which circuits and peerings to monitor. Circuits in the path will automatically be discovered , data will be available in metrics (faster than LA which was where NPM stored the results). Topology will work as is as well.
- * **Bandwidth measurements**: With the launch of bandwidth related metrics, NPMΓÇÖs log analytics based approach was not effective in bandwidth monitoring for ExpressRoute customers. This capability is now not available in Connection Monitor.
+ * **End to end loss and latency**: Connection Monitor will power this, and it will be easier than Network Performance Monitor, as users don't need to configure which circuits and peerings to monitor. Circuits in the path will automatically be discovered, data will be available in metrics (faster than LA, which was where Network Performance Monitor stored the results). Topology will work as is as well.
+ * **Bandwidth measurements**: With the launch of bandwidth related metrics, Network Performance MonitorΓÇÖs log analytics based approach wasn't effective in bandwidth monitoring for ExpressRoute customers. This capability is now not available in Connection Monitor.
## Prerequisites
-* Ensure that Network Watcher is enabled in your subscription and the region of the Log Analytics workspace. If not done, you will see an error stating "Before you attempt migrate, please enable Network watcher extension in selection subscription and location of LA workspace selected."
+* Ensure that Network Watcher is enabled in your subscription and the region of the Log Analytics workspace. If not done, you'll see an error stating "Before you attempt migrate, enable Network watcher extension in selection subscription and location of LA workspace selected."
* In case Azure VM belonging to a different region/subscription than that of Log Analytics workspace is used as an endpoint, make sure Network Watcher is enabled for that subscription and region. * Azure virtual machines with Log Analytics agents installed must be enabled with the Network Watcher extension.
To migrate the tests from Network Performance Monitor to Connection Monitor, do
1. In Network Watcher, select **Connection Monitor**, and then select the **Import tests from NPM** tab.
- :::image type="content" source="./media/connection-monitor-2-preview/migrate-npm-to-cm-preview.png" alt-text="Migrate tests from Network Performance Monitor to Connection Monitor" lightbox="./media/connection-monitor-2-preview/migrate-npm-to-cm-preview.png":::
+ :::image type="content" source="./media/connection-monitor-2-preview/migrate-netpm-to-cm-preview.png" alt-text="Migrate tests from Network Performance Monitor to Connection Monitor" lightbox="./media/connection-monitor-2-preview/migrate-netpm-to-cm-preview.png":::
-1. In the drop-down lists, select your subscription and workspace, and then select the NPM feature you want to migrate.
+1. In the drop-down lists, select your subscription and workspace, and then select the Network Performance Monitor feature you want to migrate.
1. Select **Import** to migrate the tests.
-* If NPM is not enabled on the workspace, you will see an error stating "No valid NPM config found".
-* If no tests exist in the feature you chose in step2 , you will see an error stating "Workspace selected does not have \<feature\> config".
-* If there are no valid tests, you will see an error stating "Workspace selected does not have valid tests"
-* Your tests may contain agents that are no longer active, but may have been active in the past. You will see an error stating "Few tests contain agents that are no longer active. List of inactive agents - {0}. These agents may be running in the past but are shut down/not running any more. Enable agents and migrate to Connection Monitor. Click continue to migrate the tests that do not contain agents that are not active."
+* If Network Performance Monitor isn't enabled on the workspace, you'll see an error stating "No valid NPM config found".
+* If no tests exist in the feature you chose in step 2, you'll see an error stating "Workspace selected doesn't have \<feature\> config".
+* If there are no valid tests, you'll see an error stating "Workspace selected does not have valid tests"
+* Your tests may contain agents that are no longer active, but may have been active in the past. You'll see an error stating "Few tests contain agents that are no longer active. List of inactive agents - {0}. These agents may be running in the past but are shut down/not running anymore. Enable agents and migrate to Connection Monitor. Select continue to migrate the tests that do not contain agents that are not active."
After the migration begins, the following changes take place: * A new connection monitor resource is created. * One connection monitor per region and subscription is created. For tests with on-premises agents, the new connection monitor name is formatted as `<workspaceName>_"workspace_region_name"`. For tests with Azure agents, the new connection monitor name is formatted as `<workspaceName>_<Azure_region_name>`.
- * Monitoring data is now stored in the same Log Analytics workspace in which NPM is enabled, in new tables called NWConnectionMonitorTestResult table and NWConnectionMonitorPathResult table.
+ * Monitoring data is now stored in the same Log Analytics workspace in which Network Performance Monitor is enabled, in new tables called NWConnectionMonitorTestResult table and NWConnectionMonitorPathResult table.
* The test name is carried forward as the test group name. The test description isn't migrated.
- * Source and destination endpoints are created and used in the new test group. For on-premises agents, the endpoints are formatted as `<workspaceName>_<FQDN of on-premises machine>`.The Agent description isn't migrated.
- * Destination port and probing interval are moved to a test configuration called `TC_<protocol>_<port>` and `TC_<protocol>_<port>_AppThresholds`. The protocol is set based on the port values. For ICMP, the test configurations are named as `TC_<protocol>` and `TC_<protocol>_AppThresholds`. Success thresholds and other optional properties if set are migrated, otherwise are left blank.
+ * Source and destination endpoints are created and used in the new test group. For on-premises agents, the endpoints are formatted as `<workspaceName>_<FQDN of on-premises machine>`. The Agent description isn't migrated.
+ * Destination port and probing interval are moved to a test configuration called `TC_<protocol>_<port>` and `TC_<protocol>_<port>_AppThresholds`. The protocol is set based on the port values. For ICMP, the test configurations are named `TC_<protocol>` and `TC_<protocol>_AppThresholds`. Success thresholds and other optional properties, if set, are migrated; otherwise they're left blank.
* If the migrating tests contain agents that aren't running, you need to enable the agents and migrate again.
-* NPM isn't disabled, so the migrated tests can continue to send data to the NetworkMonitoring table, NWConnectionMonitorTestResult table and NWConnectionMonitorPathResult table. This approach ensures that existing log-based alerts and integrations are unaffected.
+* Network Performance Monitor isn't disabled, so the migrated tests can continue to send data to the NetworkMonitoring table, NWConnectionMonitorTestResult table and NWConnectionMonitorPathResult table. This approach ensures that existing log-based alerts and integrations are unaffected.
* The newly created connection monitor is visible in Connection Monitor. After the migration, be sure to:
-* Manually disable the tests in NPM. Until you do so, you'll continue to be charged for them.
-* While you're disabling NPM, re-create your alerts on the NWConnectionMonitorTestResult and NWConnectionMonitorPathResult tables or use metrics.
+* Manually disable the tests in Network Performance Monitor. Until you do so, you'll continue to be charged for them.
+* While you're disabling Network Performance Monitor, re-create your alerts on the NWConnectionMonitorTestResult and NWConnectionMonitorPathResult tables or use metrics.
* Migrate any external integrations to the NWConnectionMonitorTestResult and NWConnectionMonitorPathResult tables. Examples of external integrations are dashboards in Power BI and Grafana, and integrations with Security Information and Event Management (SIEM) systems. ## Common Errors Encountered
-Below are some common errors faced during the migration :
+The following are some common errors faced during the migration:
| Error | Reason |
| --- | --- |
-| No valid NPM config found. Go to NPM UI to check config | This error occurs when User is selecting Import Tests from NPM to migrate the tests but NPM is not enabled in the workspace |
-|Workspace selected does not have 'Service Connectivity Monitor' config | This error occurs when User is migrating tests from NPM's Service Connectivity Monitor to Connection Monitor but there are no tests configured in Service Connectivity Monitor |
-|Workspace selected does not have 'ExpressRoute Monitor' config | This error occurs when User is migrating tests from NPM's ExpressRoute Monitor to Connection Monitor but there are no tests configured in ExpressRoute Monitor |
-|Workspace selected does not have 'Performance Monitor' config | This error occurs when User is migrating tests from NPM's Performance Monitor to Connection Monitor but there are no tests configured in Performance Monitor |
-|Workspace selected does not have valid '{0}' tests | This error occurs when User is migrating tests from NPM to Connection Monitor but there are no valid tests present in the feature chosen by User to migrate |
-|Before you attempt migrate, please enable Network watcher extension in selection subscription and location of LA workspace selected | This error occurs when User is migrating tests from NPM to Connection Monitor and Network Watcher Extension is not enabled in the LA workspace selected. User needs to enable NW Extension before migrating tests |
-|Few {1} tests contain agents that are no longer active. List of inactive agents - {0}. These agents may be running in the past but are shut down/not running any more. Enable agents and migrate to Connection Monitor. Click continue to migrate the tests that do not contain agents that are not active | This error occurs when User is migrating tests from NPM to Connection Monitor and some selected tests contain inactive Network Watcher Agents or such NW Agents which are no longer active but used to be active in the past and have been shut down. User can deselect these tests and continue to select and migrate the tests which do not contain any such inactive agents |
-|Your {1} tests contain agents that are no longer active. List of inactive agents - {0}. These agents may be running in the past but are shut down/not running any more. Enable agents and migrate to Connection Monitor | This error occurs when User is migrating tests from NPM to Connection Monitor and selected tests contain inactive Network Watcher Agents or such NW Agents which are no longer active but used to be active in the past and have been shut down. User needs to enable the agents and then continue to migrate these tests to Connection Monitor |
-|An error occurred while importing tests to connection monitor | This error occurs when User is trying to migrate tests from NPM to CM but due to errors the migration is not successful |
+| No valid NPM config found. Go to NPM UI to check config | This error occurs when the user selects Import Tests from Network Performance Monitor to migrate the tests, but Network Performance Monitor isn't enabled in the workspace. |
+|Workspace selected does not have 'Service Connectivity Monitor' config | This error occurs when the user migrates tests from Network Performance Monitor's Service Connectivity Monitor to Connection Monitor, but there are no tests configured in Service Connectivity Monitor. |
+|Workspace selected does not have 'ExpressRoute Monitor' config | This error occurs when the user migrates tests from Network Performance Monitor's ExpressRoute Monitor to Connection Monitor, but there are no tests configured in ExpressRoute Monitor. |
+|Workspace selected does not have 'Performance Monitor' config | This error occurs when the user migrates tests from Network Performance Monitor's Performance Monitor to Connection Monitor, but there are no tests configured in Performance Monitor. |
+|Workspace selected does not have valid '{0}' tests | This error occurs when the user migrates tests from Network Performance Monitor to Connection Monitor, but there are no valid tests present in the feature chosen for migration. |
+|Before you attempt migrate, enable Network watcher extension in selection subscription and location of LA workspace selected | This error occurs when the user migrates tests from Network Performance Monitor to Connection Monitor and the Network Watcher extension isn't enabled in the selected Log Analytics workspace. The user needs to enable the Network Watcher extension before migrating tests. |
+|Few {1} tests contain agents that are no longer active. List of inactive agents - {0}. These agents may be running in the past but are shut down/not running anymore. Enable agents and migrate to Connection Monitor. Select continue to migrate the tests that do not contain agents that are not active. | This error occurs when the user migrates tests from Network Performance Monitor to Connection Monitor and some selected tests contain Network Watcher agents that are no longer active (they were active in the past but have since been shut down). The user can deselect these tests and continue to migrate the tests that don't contain inactive agents. |
+|Your {1} tests contain agents that are no longer active. List of inactive agents - {0}. These agents may be running in the past but are shut down/not running anymore. Enable agents and migrate to Connection Monitor | This error occurs when the user migrates tests from Network Performance Monitor to Connection Monitor and the selected tests contain Network Watcher agents that are no longer active (they were active in the past but have since been shut down). The user needs to enable the agents and then migrate these tests to Connection Monitor. |
+|An error occurred while importing tests to connection monitor | This error occurs when the user tries to migrate tests from Network Performance Monitor to Connection Monitor, but the migration isn't successful due to errors. |
network-watcher Network Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-insights-overview.md
Previously updated : 11/25/2020 Last updated : 09/28/2022
The Resource view helps you visualize how a resource is configured. The Resource
![Screenshot that shows Application Gateway view in Azure Monitor Network Insights.](media/network-insights-overview/application-gateway.png)
-The resource view for Application Gateway provides a simplified view of how the front-end IPs are connected to the listeners, rules, and backend pool. The connecting lines are color coded and provide additional details based on the backend pool health. The view also provides a detailed view of Application Gateway metrics and metrics for all related backend pools, like virtual machine scale set and VM instances.
+The resource view for Application Gateway provides a simplified view of how the front-end IPs are connected to the listeners, rules, and backend pool. The connecting lines are color coded and provide additional details based on the backend pool health. The view also provides a detailed view of Application Gateway metrics and metrics for all related backend pools, like Virtual Machine Scale Sets and VM instances.
[![Screenshot that shows dependency view in Azure Monitor Network Insights.](media/network-insights-overview/dependency-view.png)](media/network-insights-overview/dependency-view.png#lightbox)
network-watcher Network Insights Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-insights-topology.md
Previously updated : 10/26/2022- Last updated : 11/16/2022+ # Topology (Preview)
network-watcher Network Insights Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-insights-troubleshooting.md
Title: Azure Monitor Network Insights troubleshooting
description: Troubleshooting steps for issues that may arise while using Network insights -+ Previously updated : 09/09/2022 Last updated : 09/29/2022 # Troubleshooting
network-watcher Network Watcher Alert Triggered Packet Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-alert-triggered-packet-capture.md
na Previously updated : 02/22/2017 Last updated : 01/20/2021 -+ # Use packet capture for proactive network monitoring with alerts and Azure Functions
network-watcher Network Watcher Analyze Nsg Flow Logs Graylog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-analyze-nsg-flow-logs-graylog.md
na Previously updated : 09/19/2017 Last updated : 07/03/2021 + # Manage and analyze network security group flow logs in Azure using Network Watcher and Graylog
network-watcher Network Watcher Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-create.md
na Previously updated : 02/22/2017 Last updated : 10/08/2021 -+ ms.devlang: azurecli
Network Watcher is a regional service that enables you to monitor and diagnose c
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] ## Network Watcher is automatically enabled
-When you create or update a virtual network in your subscription, Network Watcher will be enabled automatically in your Virtual Network's region. There is no impact to your resources or associated charge for automatically enabling Network Watcher.
+When you create or update a virtual network in your subscription, Network Watcher will be enabled automatically in your Virtual Network's region. There's no impact to your resources or associated charge for automatically enabling Network Watcher.
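To confirm which regions already have an instance, a quick check like the following sketch (assuming the Az.Network module and a signed-in Azure PowerShell session) lists the Network Watcher instances in the current subscription:

```powershell
# List existing Network Watcher instances, including the auto-created
# NetworkWatcher_<region> instances, in the current subscription.
Get-AzNetworkWatcher | Select-Object Name, Location, ResourceGroupName, ProvisioningState
```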
#### Opt-out of Network Watcher automatic enablement If you would like to opt out of Network Watcher automatic enablement, you can do so by running the following commands: > [!WARNING]
-> Opting-out of Network Watcher automatic enablement is a permanent change. Once you opt-out you cannot opt-in without [contacting support](https://azure.microsoft.com/support/options/)
+> Opting-out of Network Watcher automatic enablement is a permanent change. Once you opt-out, you cannot opt-in without contacting [support](https://azure.microsoft.com/support/options/).
```azurepowershell-interactive
Register-AzProviderFeature -FeatureName DisableNetworkWatcherAutocreation -ProviderNamespace Microsoft.Network
```

```azurecli
az provider register -n Microsoft.Network
```
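To verify that the opt-out flag has taken effect, one way (assuming the Az.Resources module) is to check the feature registration state:

```powershell
# Check the registration state of the opt-out feature flag.
# A RegistrationState of "Registered" means Network Watcher is no longer auto-created.
Get-AzProviderFeature -FeatureName DisableNetworkWatcherAutocreation -ProviderNamespace Microsoft.Network
```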
## Create a Network Watcher in the portal
-Navigate to **All Services** > **Networking** > **Network Watcher**. You can select all the subscriptions you want to enable Network Watcher for. This action creates a Network Watcher in every region that is available.
+1. Log into the [Azure portal](https://portal.azure.com) with an account that has the necessary permissions.
+2. Select **More services**.
+3. In the **All services** screen, enter **Network Watcher** in the **Filter services** search box and select it from the search result.
+You can select all the subscriptions you want to enable Network Watcher for. This action creates a Network Watcher in every region that is available.
![create a network watcher](./media/network-watcher-create/figure1.png) When you enable Network Watcher using the portal, the name of the Network Watcher instance is automatically set to *NetworkWatcher_region_name* where *region_name* corresponds to the Azure region where the instance is enabled. For example, a Network Watcher enabled in the West Central US region is named *NetworkWatcher_westcentralus*.
-The Network Watcher instance is automatically created in a resource group named *NetworkWatcherRG*. The resource group is created if it does not already exist.
+The Network Watcher instance is automatically created in a resource group named *NetworkWatcherRG*. The resource group is created if it doesn't already exist.
If you wish to customize the name of a Network Watcher instance and the resource group it's placed into, you can use PowerShell, the Azure CLI, the REST API, or ARMClient methods described in the sections that follow. In each option, the resource group must exist before you create a Network Watcher in it.
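As a minimal PowerShell sketch (the resource group and instance names below are illustrative, not required values):

```powershell
# Create the resource group first, then create a Network Watcher with a custom name in it.
New-AzResourceGroup -Name ContosoNetworkWatcherRG -Location westcentralus
New-AzNetworkWatcher -Name ContosoNetworkWatcher -ResourceGroupName ContosoNetworkWatcherRG -Location westcentralus
```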
armclient put "https://management.azure.com/subscriptions/${subscriptionId}/reso
## Create a Network Watcher using Azure Quickstart Template
-To create an instance of Network Watcher refer this [Quickstart Template](https://azure.microsoft.com/resources/templates/networkwatcher-create/)
+To create an instance of Network Watcher, refer to this [Quickstart Template](https://azure.microsoft.com/resources/templates/networkwatcher-create/).
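As a rough sketch, the same template can be deployed with PowerShell; the template URI below is a placeholder that you would replace with the raw URL of the template's azuredeploy.json file:

```powershell
# Deploy the Network Watcher quickstart template into an existing resource group.
# Replace <raw-template-uri> with the raw URL of the quickstart template's azuredeploy.json file.
New-AzResourceGroupDeployment `
    -ResourceGroupName NetworkWatcherRG `
    -TemplateUri "<raw-template-uri>"
```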
## Delete a Network Watcher in the portal
-Navigate to **All Services** > **Networking** > **Network Watcher**.
-
-Select the overview tab, if you're not already there. Use the dropdown to select the subscription you want to disable network watcher in.
-Expand the list of regions for your chosen subscription by clicking on the arrow. For any given, use the 3 dots on the right to access the context menu.
-Click on "Disable network watcher" to start disabling. You will be asked to confirm this step. Click Yes to continue.
-On the portal, you will have to do this individually for every region in every subscription.
+1. Navigate to **All Services** > **Networking** > **Network Watcher**.
+2. Select the overview tab, if you're not already there. Use the dropdown to select the subscription you want to disable network watcher in.
+3. Expand the list of locations for your chosen subscription by selecting the arrow. For any given location, select the three dots on the right to access the context menu.
+4. Select **Disable network watcher** to start disabling. You'll be asked to confirm this step. Select **Yes** to continue.
+On the portal, you'll have to do this individually for every region in every subscription.
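If you'd rather not repeat the portal steps for each region and subscription, a rough PowerShell sketch (assuming the Az.Network module; the next section describes the supported PowerShell steps) is:

```powershell
# Remove every Network Watcher instance in the current subscription.
# Run once per subscription; requires permission to delete the instances.
Get-AzNetworkWatcher | ForEach-Object {
    Remove-AzNetworkWatcher -Name $_.Name -ResourceGroupName $_.ResourceGroupName
}
```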
## Delete a Network Watcher with PowerShell
network-watcher Network Watcher Diagnose On Premises Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-diagnose-on-premises-connectivity.md
Title: Diagnose On-Premises connectivity via VPN gateway
+ Title: Diagnose on-premises connectivity via VPN gateway
description: This article describes how to diagnose on-premises connectivity via VPN gateway with Azure Network Watcher resource troubleshooting.
na Previously updated : 01/07/2021 Last updated : 01/20/2021 + # Diagnose on-premises connectivity via VPN gateways
-Azure VPN Gateway enables you to create hybrid solution that address the need for a secure connection between your on-premises network and your Azure virtual network. As your requirements are unique, so is the choice of on-premises VPN device. Azure currently supports [several VPN devices](../vpn-gateway/vpn-gateway-about-vpn-devices.md#devicetable) that are constantly validated in partnership with the device vendors. Review the device-specific configuration settings before configuring your on-premises VPN device. Similarly, Azure VPN Gateway is configured with a set of [supported IPsec parameters](../vpn-gateway/vpn-gateway-about-vpn-devices.md#ipsec) that are used for establishing connections. Currently there is no way for you to specify or select a specific combination of IPsec parameters from the Azure VPN Gateway. For establishing a successful connection between on-premises and Azure, the on-premises VPN device settings must be in accordance with the IPsec parameters prescribed by Azure VPN Gateway. If the settings are incorrect, there is a loss of connectivity and until now troubleshooting these issues was not trivial and usually took hours to identify and fix the issue.
+Azure VPN Gateway enables you to create hybrid solutions that address the need for a secure connection between your on-premises network and your Azure virtual network. As your requirements are unique, so is the choice of on-premises VPN device. Azure currently supports [several VPN devices](../vpn-gateway/vpn-gateway-about-vpn-devices.md#devicetable) that are constantly validated in partnership with the device vendors. Review the device-specific configuration settings before configuring your on-premises VPN device. Similarly, Azure VPN Gateway is configured with a set of [supported IPsec parameters](../vpn-gateway/vpn-gateway-about-vpn-devices.md#ipsec) that are used for establishing connections. Currently, there's no way for you to specify or select a specific combination of IPsec parameters from the Azure VPN Gateway. For establishing a successful connection between on-premises and Azure, the on-premises VPN device settings must be in accordance with the IPsec parameters prescribed by Azure VPN Gateway. If the settings are incorrect, there's a loss of connectivity, and until now troubleshooting these issues wasn't trivial and usually took hours to identify and fix.
-With the Azure Network Watcher troubleshoot feature, you are able to diagnose any issues with your Gateway and Connections and within minutes have enough information to make an informed decision to rectify the issue.
+With the Azure Network Watcher troubleshoot feature, you're able to diagnose any issues with your Gateway and Connections and within minutes have enough information to make an informed decision to rectify the issue.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
You want to configure a site-to-site connection between Azure and on-premises us
Detailed step by step guidance for configuring a Site-to-Site configuration can be found by visiting: [Create a VNet with a Site-to-Site connection using the Azure portal](../vpn-gateway/tutorial-site-to-site-portal.md).
-One of the critical configuration steps is configuring the IPsec communication parameters, any misconfiguration leads to loss of connectivity between the on-premises network and Azure. Currently Azure VPN Gateways are configured to support the following IPsec parameters for Phase 1. As you can see in the table below, the encryption algorithms supported by Azure VPN Gateway are AES256, AES128, and 3DES.
+One of the critical configuration steps is configuring the IPsec communication parameters; any misconfiguration leads to loss of connectivity between the on-premises network and Azure. Currently, Azure VPN Gateways are configured to support the following IPsec parameters for Phase 1. As you can see in the table below, the encryption algorithms supported by Azure VPN Gateway are AES256, AES128, and 3DES.
### IKE phase 1 setup
One of the critical configuration steps is configuring the IPsec communication p
| Hashing Algorithm | SHA1(SHA128) | SHA1(SHA128), SHA2(SHA256) |
| Phase 1 Security Association (SA) Lifetime (Time) | 28,800 seconds | 28,800 seconds |
-As a user, you would be required to configure your FortiGate, a sample configuration can be found on [GitHub](https://github.com/Azure/Azure-vpn-config-samples/blob/master/Fortinet/Current/fortigate_show%20full-configuration.txt). Unknowingly you configured your FortiGate to use SHA-512 as the hashing algorithm. As this algorithm is not a supported algorithm for policy-based connections, your VPN connection does work.
+As a user, you would be required to configure your FortiGate; a sample configuration can be found on [GitHub](https://github.com/Azure/Azure-vpn-config-samples/blob/master/Fortinet/Current/fortigate_show%20full-configuration.txt). Unknowingly, you configured your FortiGate to use SHA-512 as the hashing algorithm. Because this algorithm isn't a supported algorithm for policy-based connections, your VPN connection doesn't work.
These issues are hard to troubleshoot and root causes are often non-intuitive. In this case, you can open a support ticket to get help on resolving the issue. But with Azure Network Watcher troubleshoot API, you can identify these issues on your own.
You can get detailed information from the Scrubbed-wfpdiag.txt about the error,
Another common misconfiguration is specifying incorrect shared keys. If in the preceding example you had specified different shared keys, the IKEErrors.txt shows the following error: `Error: Authentication failed. Check shared key`.
-Azure Network Watcher troubleshoot feature enables you to diagnose and troubleshoot your VPN Gateway and Connection with the ease of a simple PowerShell cmdlet. Currently we support diagnosing the following conditions and are working towards adding more condition.
+Azure Network Watcher troubleshoot feature enables you to diagnose and troubleshoot your VPN Gateway and Connection with the ease of a simple PowerShell cmdlet. Currently, we support diagnosing the following conditions and are working towards adding more conditions.
### Gateway
| Fault Type | Reason | Log |
| --- | --- | --- |
-| NoFault | When no error is detected. |Yes|
+| NoFault | No error is detected |Yes|
| GatewayNotFound | Cannot find Gateway or Gateway is not provisioned. | No |
| PlannedMaintenance | Gateway instance is under maintenance. | No |
-| UserDrivenUpdate | When a user update is in progress. This could be a resize operation. | No |
+| UserDrivenUpdate | A user update is in progress. This could be a resize operation. | No |
| VipUnResponsive | Cannot reach the primary instance of the Gateway. This happens when the health probe fails. | No |
| PlatformInActive | There is an issue with the platform. | No |
| ServiceNotRunning | The underlying service is not running. | No |
| NoConnectionsFoundForGateway | No Connections exist on the gateway. This is only a warning. | No |
-| ConnectionsNotConnected | None of the Connections are connected. This is only a warning.| Yes|
+| ConnectionsNotConnected | None of the Connections is connected. This is only a warning.| Yes|
| GatewayCPUUsageExceeded | The current Gateway CPU usage is > 95%. | Yes |
### Connection
| Fault Type | Reason | Log |
| --- | --- | --- |
-| NoFault | When no error is detected. |Yes|
+| NoFault | No error is detected. |Yes|
| GatewayNotFound | Cannot find Gateway or Gateway is not provisioned. | No |
| PlannedMaintenance | Gateway instance is under maintenance. | No |
-| UserDrivenUpdate | When a user update is in progress. This could be a resize operation. | No |
+| UserDrivenUpdate | A user update is in progress. This could be a resize operation. | No |
| VipUnResponsive | Cannot reach the primary instance of the Gateway. It happens when the health probe fails. | No |
| ConnectionEntityNotFound | Connection configuration is missing. | No |
| ConnectionIsMarkedDisconnected | The Connection is marked "disconnected." | No |
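As a hedged illustration of how these fault types surface, the sketch below runs the troubleshoot operation against a site-to-site connection and prints the returned fault type; the resource names and the `code` property name are assumptions for illustration only:

```powershell
# Run resource troubleshooting against a connection and inspect the returned fault type.
# The resource names are examples; the storage account and container must already exist.
$networkWatcher = Get-AzNetworkWatcher -Name NetworkWatcher_westcentralus -ResourceGroupName NetworkWatcherRG
$connection     = Get-AzVirtualNetworkGatewayConnection -Name ContosoS2SConnection -ResourceGroupName ContosoRG
$storage        = Get-AzStorageAccount -ResourceGroupName ContosoRG -Name contosotshootlogs

$result = Start-AzNetworkWatcherResourceTroubleshooting -NetworkWatcher $networkWatcher `
    -TargetResourceId $connection.Id -StorageId $storage.Id `
    -StoragePath "$($storage.PrimaryEndpoints.Blob)logs"

# The fault type maps to the tables above (for example NoFault or ConnectionIsMarkedDisconnected).
$result.code
```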
network-watcher Network Watcher Intrusion Detection Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-intrusion-detection-open-source-tools.md
na Previously updated : 01/07/2021 Last updated : 09/15/2022 + # Perform network intrusion detection with Network Watcher and open source tools
network-watcher Network Watcher Monitor With Azure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-monitor-with-azure-automation.md
Title: Troubleshoot and monitor VPN gateways - Azure Automation
-description: This article describes how diagnose On-premises connectivity with Azure Automation and Network Watcher
+description: This article describes how to diagnose On-premises connectivity with Azure Automation and Network Watcher
documentationcenter: na
na Previously updated : 02/22/2017 Last updated : 11/20/2020+
network-watcher Network Watcher Network Configuration Diagnostics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-network-configuration-diagnostics-overview.md
na Previously updated : 09/15/2020 Last updated : 03/18/2022+
network-watcher Network Watcher Next Hop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-next-hop-overview.md
na Previously updated : 02/22/2017 Last updated : 01/29/2020 + # Use next hop to diagnose virtual machine routing problems
network-watcher Network Watcher Nsg Auditing Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-auditing-powershell.md
na Previously updated : 02/22/2017 Last updated : 03/01/2022 -+
network-watcher Network Watcher Nsg Flow Logging Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-azure-resource-manager.md
na Previously updated : 01/07/2021 Last updated : 02/09/2022 -+
network-watcher Network Watcher Nsg Flow Logging Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-cli.md
Previously updated : 01/07/2021 Last updated : 12/09/2021 +
https://{storageAccountName}.blob.core.windows.net/insights-logs-networksecurity
## Next Steps
-Learn how to [Visualize your NSG flow logs with PowerBI](network-watcher-visualize-nsg-flow-logs-power-bi.md)
+Learn how to [Visualize your NSG flow logs with Power BI](network-watcher-visualize-nsg-flow-logs-power-bi.md)
Learn how to [Visualize your NSG flow logs with open source tools](network-watcher-visualize-nsg-flow-logs-open-source-tools.md)
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
na Previously updated : 01/04/2021 Last updated : 10/06/2022+
network-watcher Network Watcher Nsg Flow Logging Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-powershell.md
Previously updated : 01/07/2021 Last updated : 12/24/2021 -+ # Configuring Network Security Group Flow logs with Azure PowerShell
network-watcher Network Watcher Nsg Flow Logging Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-rest.md
na Previously updated : 01/07/2021 Last updated : 07/13/2021 +
network-watcher Network Watcher Nsg Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-grafana.md
na Previously updated : 09/15/2017 Last updated : 09/15/2022 -+ # Manage and analyze Network Security Group flow logs using Network Watcher and Grafana
network-watcher Network Watcher Packet Capture Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-cli.md
na Previously updated : 01/07/2021 Last updated : 12/09/2021 + # Manage packet captures with Azure Network Watcher using the Azure CLI
network-watcher Network Watcher Packet Capture Manage Portal Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-portal-vmss.md
na Previously updated : 01/07/2021 Last updated : 06/07/2022+
network-watcher Network Watcher Packet Capture Manage Powershell Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-powershell-vmss.md
na Previously updated : 01/07/2021 Last updated : 06/07/2022 -+ # Manage packet captures in Virtual machine scale set with Azure Network Watcher using PowerShell
network-watcher Network Watcher Packet Capture Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-powershell.md
na Previously updated : 01/07/2021 Last updated : 02/01/2021 -+ # Manage packet captures with Azure Network Watcher using PowerShell
network-watcher Network Watcher Packet Capture Manage Rest Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-rest-vmss.md
na Previously updated : 01/07/2021 Last updated : 10/04/2022 -+
network-watcher Network Watcher Packet Capture Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-rest.md
na Previously updated : 01/07/2021 Last updated : 05/28/2021 -+
network-watcher Network Watcher Packet Capture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-overview.md
na Previously updated : 02/22/2017 Last updated : 06/07/2022 -+ # Introduction to variable packet capture in Azure Network Watcher
network-watcher Network Watcher Read Nsg Flow Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-read-nsg-flow-logs.md
na Previously updated : 01/04/2021 Last updated : 02/09/2021 -+ # Read NSG flow logs
network-watcher Network Watcher Security Group View Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-cli.md
na Previously updated : 02/22/2017 Last updated : 12/09/2021 + # Analyze your Virtual Machine security with Security Group View using Azure CLI
network-watcher Network Watcher Security Group View Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-overview.md
na Previously updated : 04/26/2017 Last updated : 03/18/2022 + # Introduction to Effective security rules view in Azure Network Watcher
network-watcher Network Watcher Security Group View Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-powershell.md
na Previously updated : 02/22/2017 Last updated : 11/20/2020 -+ # Analyze your Virtual Machine security with Security Group View using PowerShell
network-watcher Network Watcher Security Group View Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-rest.md
na Previously updated : 02/22/2017 Last updated : 03/01/2022 -+ # Analyze your Virtual Machine security with Security Group View using REST API
network-watcher Network Watcher Troubleshoot Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-manage-cli.md
na Previously updated : 01/07/2021 Last updated : 07/25/2022 +
network-watcher Network Watcher Troubleshoot Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-manage-powershell.md
Title: Troubleshoot Azure VNet gateway and connections - Azure PowerShell
description: This page explains how to use the Azure Network Watcher troubleshoot PowerShell cmdlet -+ Previously updated : 01/07/2021-- Last updated : 11/22/2022++ # Troubleshoot Virtual Network Gateway and Connections using Azure Network Watcher PowerShell
> - [Azure CLI](network-watcher-troubleshoot-manage-cli.md) > - [REST API](network-watcher-troubleshoot-manage-rest.md)
-Network Watcher provides many capabilities as it relates to understanding your network resources in Azure. One of these capabilities is resource troubleshooting. Resource troubleshooting can be called through the portal, PowerShell, CLI, or REST API. When called, Network Watcher inspects the health of a Virtual Network Gateway or a Connection and returns its findings.
+Network Watcher provides various capabilities related to understanding your network resources in Azure. One of these capabilities is resource troubleshooting. Resource troubleshooting can be called through the Azure portal, PowerShell, CLI, or REST API. When called, Network Watcher inspects the health of a Virtual Network Gateway or a Connection and returns its findings.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-## Before you begin
+## Prerequisites
-This scenario assumes you have already followed the steps in [Create a Network Watcher](network-watcher-create.md) to create a Network Watcher.
-
-For a list of supported gateway types visit, [Supported Gateway types](network-watcher-troubleshoot-overview.md#supported-gateway-types).
+- A [Network Watcher instance](network-watcher-create.md).
+- Ensure you're using a supported Gateway type. [Learn more](network-watcher-troubleshoot-overview.md#supported-gateway-types).
## Overview
-Resource troubleshooting provides the ability troubleshoot issues that arise with Virtual Network Gateways and Connections. When a request is made to resource troubleshooting, logs are being queried and inspected. When inspection is complete, the results are returned. Resource troubleshooting requests are long running requests, which could take multiple minutes to return a result. The logs from troubleshooting are stored in a container on a storage account that is specified.
+Resource troubleshooting provides the ability to troubleshoot issues that arise with Virtual Network Gateways and Connections. When a request is made to resource troubleshooting, logs are queried and inspected. When inspection is complete, the results are returned. Resource troubleshooting requests are long-running and could take multiple minutes to return a result. The logs from troubleshooting are stored in a container in the storage account that you specify.
## Retrieve Network Watcher
$sc = New-AzStorageContainer -Name logs
## Run Network Watcher resource troubleshooting
-You troubleshoot resources with the `Start-AzNetworkWatcherResourceTroubleshooting` cmdlet. We pass the cmdlet the Network Watcher object, the Id of the Connection or Virtual Network Gateway, the storage account id, and the path to store the results.
+You can troubleshoot resources with the [Start-AzNetworkWatcherResourceTroubleshooting](/powershell/module/az.network/start-aznetworkwatcherresourcetroubleshooting) cmdlet. We pass the cmdlet the Network Watcher object, the ID of the Connection or Virtual Network Gateway, the storage account ID, and the path to store the results.
> [!NOTE]
-> The `Start-AzNetworkWatcherResourceTroubleshooting` cmdlet is long running and may take a few minutes to complete.
+> The [Start-AzNetworkWatcherResourceTroubleshooting](/powershell/module/az.network/start-aznetworkwatcherresourcetroubleshooting) cmdlet is long running and may take a few minutes to complete.
```powershell
Start-AzNetworkWatcherResourceTroubleshooting -NetworkWatcher $networkWatcher -TargetResourceId $connection.Id -StorageId $sa.Id -StoragePath "$($sa.PrimaryEndpoints.Blob)$($sc.name)"
```
-Once you run the cmdlet, Network Watcher reviews the resource to verify the health. It returns the results to the shell and stores logs of the results in the storage account specified.
+Once you run the cmdlet, Network Watcher reviews the resource to verify its health. It returns the results to the shell and stores logs of the results in the storage account specified.
## Understanding the results
-The action text provides general guidance on how to resolve the issue. If an action can be taken for the issue, a link is provided with additional guidance. In the case where there is no additional guidance, the response provides the url to open a support case. For more information about the properties of the response and what is included, visit [Network Watcher Troubleshoot overview](network-watcher-troubleshoot-overview.md)
+The action text provides general guidance on how to resolve the issue.
+
+- If an action can be taken for the issue, a link is provided with additional guidance.
+- If there's no guidance provided, the response provides the URL to open a support case.
+
+For more information about the properties of the response and what is included, see [Network Watcher Troubleshoot overview](network-watcher-troubleshoot-overview.md).
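As a small sketch, assuming the returned object exposes the `code` and `results` properties described in the troubleshoot overview, you can inspect the findings directly in the shell:

```powershell
# Capture the troubleshooting output and inspect it; property names assume the response
# shape described in the Network Watcher troubleshoot overview.
$result = Start-AzNetworkWatcherResourceTroubleshooting -NetworkWatcher $networkWatcher `
    -TargetResourceId $connection.Id -StorageId $sa.Id -StoragePath "$($sa.PrimaryEndpoints.Blob)$($sc.name)"

$result.code                  # overall health, for example Healthy or UnHealthy
$result.results | Format-List # individual issues, summaries, and recommended actions
```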
-For instructions on downloading files from azure storage accounts, refer to [Get started with Azure Blob storage using .NET](../storage/blobs/storage-quickstart-blobs-dotnet.md). Another tool that can be used is Storage Explorer. More information about Storage Explorer can be found here at the following link: [Storage Explorer](https://storageexplorer.com/)
+For instructions on downloading files from Azure storage accounts, refer to [Get started with Azure Blob storage using .NET](../storage/blobs/storage-quickstart-blobs-dotnet.md). Another tool that can be used is Storage Explorer. For more information, see [Storage Explorer](https://storageexplorer.com/).
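Alternatively, a hedged PowerShell sketch (assuming the Az.Storage module and the `$sa` and `$sc` variables created earlier in this article) downloads the troubleshooting logs locally:

```powershell
# Download all blobs from the troubleshooting logs container to a local folder.
$destination = "C:\temp\tshoot-logs"
New-Item -ItemType Directory -Path $destination -Force | Out-Null

Get-AzStorageBlob -Container $sc.Name -Context $sa.Context |
    Get-AzStorageBlobContent -Destination $destination -Context $sa.Context
```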
## Next steps
-If settings have been changed that stop VPN connectivity, see [Manage Network Security Groups](../virtual-network/manage-network-security-group.md) to track down the network security group and security rules that may be in question.
+If VPN connectivity has been stopped due to a change in settings, see [Manage Network Security Groups](../virtual-network/manage-network-security-group.md) to track down the network security group and security rules that may be in question.
network-watcher Network Watcher Troubleshoot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-overview.md
na Previously updated : 06/19/2017 Last updated : 03/31/2022 + # Introduction to resource troubleshooting in Azure Network Watcher
network-watcher Network Watcher Using Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-using-open-source-tools.md
na Previously updated : 02/22/2017 Last updated : 02/25/2021+
network-watcher Network Watcher Visualize Nsg Flow Logs Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-visualize-nsg-flow-logs-open-source-tools.md
na Previously updated : 02/22/2017 Last updated : 09/15/2022 + # Visualize Azure Network Watcher NSG flow logs using open source tools
network-watcher Network Watcher Visualize Nsg Flow Logs Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-visualize-nsg-flow-logs-power-bi.md
na Previously updated : 01/07/2021 Last updated : 06/23/2021 + # Visualizing Network Security Group flow logs with Power BI
network-watcher Nsg Flow Logs Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/nsg-flow-logs-policy-portal.md
na Previously updated : 01/07/2021 Last updated : 02/09/2022 -+ # QuickStart: Deploy and manage NSG Flow Logs using Azure Policy
network-watcher Resource Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/resource-move.md
na Previously updated : 01/07/2021 Last updated : 06/10/2021 -+
network-watcher Supported Region Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/supported-region-traffic-analytics.md
na Previously updated : 05/11/2022 Last updated : 06/15/2022 ms.custon: references_regions+ # Supported regions: NSG
network-watcher Traffic Analytics Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-policy-portal.md
Previously updated : 07/11/2021 Last updated : 02/09/2022 -+ # Deploy and manage Traffic Analytics using Azure Policy
network-watcher Traffic Analytics Schema Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-schema-update.md
documentationcenter: na - -+ na Previously updated : 06/13/2022 Last updated : 06/20/2022
network-watcher Traffic Analytics Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-schema.md
Previously updated : 01/07/2021 Last updated : 03/29/2022+
network-watcher Usage Scenarios Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/usage-scenarios-traffic-analytics.md
na Previously updated : 05/11/2022 Last updated : 05/30/2022 -+ # Usage scenarios
network-watcher View Network Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/view-network-topology.md
na-+ Previously updated : 05/09/2018 Last updated : 11/11/2022
network-watcher View Relative Latencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/view-relative-latencies.md
description: Learn how to view relative latencies across Internet providers to A
documentationcenter: '' - na Previously updated : 12/14/2017 Last updated : 04/20/2022 -+ # View relative latency to Azure regions from specific locations
purview Troubleshoot Policy Distribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/troubleshoot-policy-distribution.md
Previously updated : 11/14/2022 Last updated : 11/21/2022 # Tutorial: Troubleshoot distribution of Microsoft Purview access policies (preview)
This guide uses examples from SQL Server as data sources.
* Register a data source, enable *Data use management*, and create a policy. To do so, use one of the Microsoft Purview policy guides. To follow along with the examples in this tutorial, you can [create a DevOps policy for Azure SQL Database](how-to-policies-devops-azure-sql-db.md). * Establish a bearer token and call data plane APIs. To learn how, see [how to call REST APIs for Microsoft Purview data planes](tutorial-using-rest-apis.md). To be authorized to fetch policies, you need to be a Policy Author, Data Source Admin, or Data Curator at the root-collection level in Microsoft Purview. To assign those roles, see [Manage Microsoft Purview role assignments](catalog-permissions.md#assign-permissions-to-your-users).
-## Overviewrelecloud-sql-srv1
+## Overview
You can fetch access policies from Microsoft Purview via either a *full pull* or a *delta pull*, as described in the following sections.
Full pull provides a complete set of policies for a particular data resource sco
To fetch policies for a data source via full pull, send a `GET` request to `/policyElements`, as follows: ```
-GET {{endpoint}}/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProvider}/{resourceType}/{resourceName}/policyelements?api-version={apiVersion}
+GET {{endpoint}}/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProvider}/{resourceType}/{resourceName}/policyelements?api-version={apiVersion}&$filter={filter}
``` where the path `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProvider}/{resourceType}/{resourceName}` matches the resource ID for the data source.
+The last two parameters, `api-version` and `$filter`, are query parameters of type string.
+`$filter` is optional and can take the following values: `atScope` (the default, if the parameter isn't specified) or `childrenScope`. The first value requests all the policies that apply at the level of the path, including policies that exist at a higher scope as well as policies that apply specifically to a lower scope, that is, to children data objects. The second value returns only the fine-grained policies that apply to the children data objects.
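For example, a hedged PowerShell sketch of a full pull call is shown below; the resource path segments and the `$token` variable are placeholders you would replace with your own values (see the tip that follows for where to find the resource ID), and the endpoint reuses the example account from this article:

```powershell
# Full pull of the policies that apply at the scope of a single data source.
# $token must contain a bearer token obtained as described in the prerequisites.
$endpoint   = "https://relecloud-pv.purview.azure.com/pds"
$resourceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Sql/servers/<server-name>"
$uri        = "$endpoint$resourceId/policyElements?api-version=2021-01-01-preview&`$filter=atScope"

Invoke-RestMethod -Method Get -Uri $uri -Headers @{ Authorization = "Bearer $token" }
```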
+ >[!Tip] > The resource ID can be found under the properties for the data source in the Azure portal.
where the path `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupNam
**Example request**: ```
-GET https://relecloud-pv.purview.azure.com/pds/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1/policyElements?api-version=2021-01-01-preview
+GET https://relecloud-pv.purview.azure.com/pds/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1/policyElements?api-version=2021-01-01-preview&$filter=atScope
``` **Example response**:
GET https://relecloud-pv.purview.azure.com/pds/subscriptions/BB345678-abcd-ABCD-
```json {
- "count": 7,
+ "count": 2,
"syncToken": "820:0", "elements": [ {
search Search Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-traffic-analytics.md
In the [portal](https://portal.azure.com) page for your Azure Cognitive Search s
Select an existing Application Insights resource or [create one](../azure-monitor/app/create-new-resource.md) if you don't have one already. If you use the Search Traffic Analytics page, you can copy the instrumentation key your application needs to connect to Application Insights.
-Once you have an Application Insights resource, you can follow [instructions for supported languages and platforms](../azure-monitor/app/platforms.md) to register your app. Registration is simply adding the instrumentation key from Application Insights to your code, which sets up the association. You can find the key in the portal, or from the Search Traffic Analytics page when you select an existing resource.
+Once you have an Application Insights resource, you can follow [instructions for supported languages and platforms](../azure-monitor/app/app-insights-overview.md#supported-languages) to register your app. Registration is simply adding the instrumentation key from Application Insights to your code, which sets up the association. You can find the key in the portal, or from the Search Traffic Analytics page when you select an existing resource.
A shortcut that works for some Visual Studio project types is reflected in the following steps. It creates a resource and registers your app in just a few clicks.
This step is where you instrument your own search application, using the Applica
### Step 1: Create a telemetry client
-Create an object that sends events to Application Insights. You can add instrumentation to your server-side application code or client-side code running in a browser, expressed here as C# and JavaScript variants (for other languages, see the complete list of [supported platforms and frameworks](../azure-monitor/app/platforms.md). Choose the approach that gives you the desired depth of information.
+Create an object that sends events to Application Insights. You can add instrumentation to your server-side application code or client-side code running in a browser, expressed here as C# and JavaScript variants (for other languages, see the complete list of [supported platforms and frameworks](../azure-monitor/app/app-insights-overview.md#supported-languages)). Choose the approach that gives you the desired depth of information.
Server-side telemetry captures metrics at the application layer, for example in applications running as a web service in the cloud, or as an on-premises app on a corporate network. Server-side telemetry captures search and click events, the position of a document in results, and query information, but your data collection will be scoped to whatever information is available at that layer.
sentinel Connect Cef Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-ama.md
The AMA is installed on a Linux machine that acts as a log forwarder, and the AM
> The CEF via AMA connector is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. > [!NOTE]
-> On February 28th 2023, we will introduce [changes to the CommonSecurityLog table schema](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/upcoming-changes-to-the-commonsecuritylog-table/ba-p/3643232). This means that custom queries will require being reviews and updates. Out-of-the-box content (detections, hunting queries, workbooks, parsers, etc.) will be updated by Microsoft Sentinel.
+> On February 28th 2023, we will introduce [changes to the CommonSecurityLog table schema](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/upcoming-changes-to-the-commonsecuritylog-table/ba-p/3643232). This means that custom queries will need to be reviewed and updated. Out-of-the-box content (detections, hunting queries, workbooks, parsers, etc.) will be updated by Microsoft Sentinel.
## Overview
This example collects events for:
In this article, you learned how to set up the Windows CEF via AMA connector to upload data from appliances that support CEF over Syslog. To learn more about Microsoft Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).-- [Use workbooks](monitor-your-data.md) to monitor your data.
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Logstash Data Connection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-logstash-data-connection-rules.md
The Microsoft Sentinel output plugin for Logstash sends JSON-formatted data to y
- Install a supported version of Logstash. The plugin supports: - Logstash version 7.0 to 7.17.6.
- - Logstash version 8.0 to 8.4.2.
+ - Logstash version 8.0 to 8.5.1.
> [!NOTE] > If you use Logstash 8, we recommended that you [disable ECS in the pipeline](https://www.elastic.co/guide/en/logstash/8.4/ecs-ls.html).
The Microsoft Sentinel output plugin for Logstash sends JSON-formatted data to y
The Microsoft Sentinel output plugin is available in the Logstash collection. -- Follow the instructions in the Logstash [Working with plugins](https://www.elastic.co/guide/en/logstash/current/working-with-plugins.html) document to install the **[microsoft-logstash-output-azure-loganalytics](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/microsoft-logstash-output-azure-loganalytics)** plugin.
+- Follow the instructions in the Logstash [Working with plugins](https://www.elastic.co/guide/en/logstash/current/working-with-plugins.html) document to install the **[microsoft-logstash-output-azure-loganalytics](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/microsoft-sentinel-logstash-output-plugin)** plugin.
- If your Logstash system does not have Internet access, follow the instructions in the Logstash [Offline Plugin Management](https://www.elastic.co/guide/en/logstash/current/offline-plugins.html) document to prepare and use an offline plugin pack. (This will require you to build another Logstash system with Internet access.) ### Create a sample file
sentinel Connect Logstash https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-logstash.md
The Logstash engine is comprised of three components:
> > - Microsoft does not support third-party Logstash output plugins for Microsoft Sentinel, or any other Logstash plugin or component of any type. >
-> - Microsoft Sentinel's Logstash output plugin supports only **Logstash versions from 7.0 to 7.16**.
+> - Microsoft Sentinel's Logstash output plugin supports only **Logstash versions 7.0 to 7.17.6, and versions 8.0 to 8.5.1**.
The Microsoft Sentinel output plugin for Logstash sends JSON-formatted data to your Log Analytics workspace, using the Log Analytics HTTP Data Collector REST API. The data is ingested into custom logs.
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
For more information, see the Cognito Detect Syslog Guide, which can be download
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-workspace-transformations-portal.md) | | **Supported by** | Microsoft |
-> [!NOTE]
-> This connector was designed to import only those alerts whose status is "open." Alerts that have been closed in Azure AD Identity Protection will not be imported to Microsoft Sentinel.
- ## Azure Activity | Connector attribute | Description |
sentinel Use Matching Analytics To Detect Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/use-matching-analytics-to-detect-threats.md
Matching analytics is configured when you enable the **Microsoft Threat Intellig
:::image type="content" source="media/use-matching-analytics-to-detect-threats/configure-matching-analytics-rule.png" alt-text="A screenshot showing the Microsoft Threat Intelligence Analytics rule enabled in the Active rules tab.":::
-Alerts are grouped on a per-observable basis. For example, all alerts generated in a 24-hour time period that match the `contoso.com` domain are grouped into a single incident with the appropriate severity.
- ## Data sources and indicators
Use the following steps to triage through the incidents generated by the **Micro
:::image type="content" source="media/work-with-threat-indicators/matching-analytics.png" alt-text="Screenshot of incident generated by matching analytics with details pane.":::
-When a match is found, the indicator is also published to the Log Analytics **ThreatIntelligenceIndicators**, and displayed in the **Threat Intelligence** page. For any indicators published from this rule, the source is defined as **Microsoft Threat Intelligence Analytics**.
+1. Observe the severity assigned to the alerts and the incident. Depending on how the indicator is matched, an appropriate severity is assigned to an alert from `Informational` to `High`. For example, if the indicator is matched with firewall logs that have allowed the traffic, a high severity alert is generated. If the same indicator was matched with firewall logs that blocked the traffic, the alert generated would be low or medium.
+
+ Alerts are then grouped on a per-observable basis of the indicator. For example, all alerts generated in a 24-hour time period that match the `contoso.com` domain are grouped into a single incident with a severity assigned based on the highest alert severity.
+
+1. Observe the indicator details. When a match is found, the indicator is published to the Log Analytics **ThreatIntelligenceIndicators** table, and displayed in the **Threat Intelligence** page. For any indicators published from this rule, the source is defined as **Microsoft Threat Intelligence Analytics**.
For example, in the **ThreatIntelligenceIndicators** log:
Part of the Microsoft Threat Intelligence available through matching analytics i
:::image type="content" source="mediTI article.":::
-For more information, see the [MDTI portal](https://ti.defender.microsoft.com).
+For more information, see the [MDTI portal](https://ti.defender.microsoft.com) and [What is Microsoft Defender Threat Intelligence?](/../../defender/threat-intelligence/what-is-microsoft-defender-threat-intelligence-defender-ti.md)
## Next steps
spatial-anchors Reliability Spatial Anchors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/concepts/reliability-spatial-anchors.md
+
+ Title: Resiliency in Azure Spatial Anchors
+description: Find out about reliability in Azure Spatial Anchors
+++++ Last updated : 11/18/2022
+#Customer intent: As a customer, I want to understand reliability support for Azure Spatial Anchors so that I can respond to and/or avoid failures in order to minimize downtime and data loss.
++
+# What is reliability in Azure Spatial Anchors?
+
+This article describes reliability support in Azure Spatial Anchors, and covers both regional resiliency with [availability zones](#availability-zones) and cross-region resiliency with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](../../reliability/overview.md).
+
+## Azure Spatial Anchors
+
+[Azure Spatial Anchors](../overview.md) empowers developers with essential capabilities to build spatially aware
+mixed reality applications. It enables developers to work with mixed reality platforms to
+perceive spaces, designate precise points of interest, and recall those points of interest from supported devices.
+These precise points of interest are referred to as Spatial Anchors.
+
+## Availability zones
+
+For more information about availability zones, see [Regions and availability zones](../../reliability/availability-zones-overview.md).
+
+Within a given region, all Azure Spatial Anchors accounts run as Active-Active. Failure of even an entire cluster within any given region isn't expected to impact overall service availability, provided the incoming load doesn't exceed the capacity of the remaining cluster.
+
+## Availability zone support
+
+The SouthEastAsia region doesn't rely on Azure Paired Regions, in order to remain compliant with data privacy regulations. A failure of this entire region will impact overall service availability, since there's no other region to redirect traffic to.
+
+### Prerequisites
+
+For a list of regions that support availability zones, see [Azure regions with availability zones](../../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support). If your Azure Spatial Anchors account is located in one of the regions listed, you don't need to take any other action beyond provisioning the service.
+
+#### Create a resource with availability zone enabled
+
+To enable AZ support for Azure Spatial Anchors, you don't need to take further steps beyond provisioning the account. Just create the resource in a region with AZ support, and it will be available across all AZs.
+
+For detailed steps on how to provision the account, see [Create an Azure Spatial Anchors account](../how-tos/create-asa-account.md).
+
+### Fault tolerance
+
+During a zone-wide outage, the customer should expect brief degradation of performance until the service's self-healing rebalances underlying capacity to adjust to healthy zones. This isn't dependent on zone restoration; it's expected that the Microsoft-managed self-healing process will compensate for a lost zone by using capacity from other zones.
+
+## Disaster recovery: cross-region failover
+
+During an Azure regional outage, recovery of an Azure Spatial Anchors account relies on the Azure Paired Regions relationship for failover of resource dependencies, plus manual failover of those dependencies. This failover is the responsibility of Microsoft, not customers.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Resiliency in Azure](../../reliability/overview.md)
storage Storage Quickstart Blobs Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-dotnet.md
# Quickstart: Azure Blob Storage client library for .NET
-Get started with the Azure Blob Storage client library for .NET. Azure Blob Storage is Microsoft's object storage solution for the cloud. Follow steps to install the package and try out example code for basic tasks. Blob storage is optimized for storing massive amounts of unstructured data.
+Get started with the Azure Blob Storage client library for .NET. Azure Blob Storage is Microsoft's object storage solution for the cloud. Follow these steps to install the package and try out example code for basic tasks. Blob storage is optimized for storing massive amounts of unstructured data.
-The examples in this quickstart show you how to use the Azure Blob Storage client library for .NET to:
-
-* [Create the project and configure dependencies](#setting-up)
-* [Authenticate to Azure and authorize access to blob data](#authenticate-to-azure-and-authorize-access-to-blob-data)
-* [Create a container](#create-a-container)
-* [Upload a blob to a container](#upload-a-blob-to-a-container)
-* [List blobs in a container](#list-blobs-in-a-container)
-* [Download a blob](#download-a-blob)
-* [Delete a container](#delete-a-container)
-
-Additional resources:
--- [API reference documentation](/dotnet/api/azure.storage.blobs)-- [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs)-- [Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Blobs)-- [Samples](../common/storage-samples-dotnet.md?toc=/azure/storage/blobs/toc.json#blob-samples)
+[API reference documentation](/dotnet/api/azure.storage.blobs) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs) | [Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Blobs) | [Samples](../common/storage-samples-dotnet.md?toc=/azure/storage/blobs/toc.json#blob-samples)
## Prerequisites
In this quickstart, you learned how to upload, download, and list blobs using .N
To see Blob storage sample apps, continue to: > [!div class="nextstepaction"]
-> [Azure Blob Storage SDK .NET samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs/samples)
+> [Azure Blob Storage library for .NET samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs/samples)
-- For tutorials, samples, quick starts and other documentation, visit [Azure for .NET and .NET Core developers](/dotnet/azure/).-- To learn more about .NET Core, see [Get started with .NET in 10 minutes](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro).
+- To learn more, see the [Azure Blob Storage client libraries for .NET](/dotnet/api/overview/azure/storage).
+- For tutorials, samples, quick starts and other documentation, visit [Azure for .NET developers](/dotnet/azure/sdk/azure-sdk-for-dotnet).
+- To learn more about .NET, see [Get started with .NET in 10 minutes](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro).
storage Storage Quickstart Blobs Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-go.md
ms.devlang: golang
-# Quickstart: Upload, download, and list blobs using Go
+# Quickstart: Azure Blob Storage client library for Go
-In this quickstart, you learn how to use the Go programming language to upload, download, and list block blobs in a container in Azure Blob storage.
+Get started with the Azure Blob Storage client library for Go to manage blobs and containers. Follow these steps to install the package and try out example code for basic tasks.
+
+[API reference documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#section-readme) | [Library source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob) | [Package (pkg.go.dev)](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob) | [Samples (GitHub)](https://github.com/Azure-Samples/azure-sdk-for-go-samples/tree/main/services/storage)
## Prerequisites
See these other resources for Go development with Blob storage:
## Next steps
-In this quickstart, you learned how to transfer files between a local disk and Azure blob storage using Go. For more information about the Azure Storage Blob SDK, view the [Source Code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob) and [API Reference](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob).
+In this quickstart, you learned how to transfer files between a local disk and Azure blob storage using Go. For more information about the Azure Storage Blob client library, view the [Source Code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob) and [API Reference](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob).
storage Storage Quickstart Blobs Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-java.md
ms.devlang: java
# Quickstart: Azure Blob Storage client library for Java
-Get started with the Azure Blob Storage client library for Java to manage blobs and containers. Follow steps to install the package and try out example code for basic tasks.
+Get started with the Azure Blob Storage client library for Java to manage blobs and containers. Follow these steps to install the package and try out example code for basic tasks.
[API reference documentation](/jav?toc=/azure/storage/blobs/toc.json#blob-samples)
In this quickstart, you learned how to upload, download, and list blobs using Ja
To see Blob storage sample apps, continue to: > [!div class="nextstepaction"]
-> [Azure Blob Storage SDK for Java samples](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/storage/azure-storage-blob/src/samples/java/com/azure/storage/blob)
+> [Azure Blob Storage library for Java samples](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/storage/azure-storage-blob/src/samples/java/com/azure/storage/blob)
-- To learn more, see the [Azure SDK for Java](https://github.com/Azure/azure-sdk-for-jav).-- For tutorials, samples, quickstarts, and other documentation, visit [Azure for Java cloud developers](/azure/developer/java/).
+- To learn more, see the [Azure Blob Storage client libraries for Java](/java/api/overview/azure/storage-blob-readme).
+- For tutorials, samples, quickstarts, and other documentation, visit [Azure for Java developers](/azure/developer/java/sdk/overview).
storage Storage Quickstart Blobs Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-nodejs.md
ms.devlang: javascript
-# Quickstart: Manage blobs with JavaScript SDK in Node.js
+# Quickstart: Azure Blob Storage client library for Node.js
-In this quickstart, you learn to manage blobs by using Node.js. Blobs are objects that can hold large amounts of text or binary data, including images, documents, streaming media, and archive data.
+Get started with the Azure Blob Storage client library for Node.js to manage blobs and containers. Follow these steps to install the package and try out example code for basic tasks.
[API reference](/javascript/api/@azure/storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob) | [Package (npm)](https://www.npmjs.com/package/@azure/storage-blob) | [Samples](../common/storage-samples-javascript.md?toc=/azure/storage/blobs/toc.json#blob-samples)
Step through the code in your debugger and check your [Azure portal](https://por
In this quickstart, you learned how to upload, download, and list blobs using JavaScript.
-For tutorials, samples, quickstarts, and other documentation, visit:
+To see Blob storage sample apps, continue to:
> [!div class="nextstepaction"]
-> [Azure for JavaScript developer center](/azure/developer/javascript/)
+> [Azure Blob Storage library for JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob/samples)
-- To learn how to deploy a web app that uses Azure Blob storage, see [Tutorial: Upload image data in the cloud with Azure Storage](./storage-upload-process-images.md?preserve-view=true&tabs=javascript)-- To see Blob storage sample apps, continue to [Azure Blob storage package library JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage/storage-blob/samples).-- To learn more, see the [Azure Blob storage client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/storage/storage-blob).
+- To learn more, see the [Azure Blob Storage client libraries for JavaScript](/javascript/api/overview/azure/storage-blob-readme).
+- For tutorials, samples, quickstarts, and other documentation, visit [Azure for JavaScript and Node.js developers](/azure/developer/javascript/).
storage Storage Quickstart Blobs Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md
Title: 'Quickstart: Azure Blob Storage client library for Python'
+ Title: "Quickstart: Azure Blob Storage client library for Python"
description: In this quickstart, you learn how to use the Azure Blob Storage client library for Python to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.
# Quickstart: Azure Blob Storage client library for Python
-Get started with the Azure Blob Storage client library for Python to manage blobs and containers. Follow steps to install the package and try out example code for basic tasks in an interactive console app.
+Get started with the Azure Blob Storage client library for Python to manage blobs and containers. Follow these steps to install the package and try out example code for basic tasks in an interactive console app.
[API reference documentation](/python/api/azure-storage-blob) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-blob) | [Package (PyPi)](https://pypi.org/project/azure-storage-blob/) | [Samples](../common/storage-samples-python.md?toc=/azure/storage/blobs/toc.json#blob-samples)
To see Blob storage sample apps, continue to:
> [!div class="nextstepaction"] > [Azure Blob Storage library for Python samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/storage/azure-storage-blob/samples) -- To learn more, see the [Azure Storage client libraries for Python](/azure/developer/python/sdk/azure-sdk-overview).-- For tutorials, samples, quickstarts, and other documentation, visit [Azure for Python Developers](/azure/python/).
+- To learn more, see the [Azure Blob Storage client libraries for Python](/python/api/overview/azure/storage-blob-readme).
+- For tutorials, samples, quickstarts, and other documentation, visit [Azure for Python Developers](/azure/developer/python/sdk/azure-sdk-overview).
storage Storage Files Identity Ad Ds Assign Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-assign-permissions.md
You could also assign permissions to all authenticated Azure AD users and specif
## Next steps
-Now that you've assigned share-level permissions, you must [configure directory and file-level permissions](storage-files-identity-ad-ds-configure-permissions.md).
+Now that you've assigned share-level permissions, you can [configure directory and file-level permissions](storage-files-identity-ad-ds-configure-permissions.md).
storage Storage Files Identity Ad Ds Configure Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-configure-permissions.md
Previously updated : 11/09/2022 Last updated : 11/22/2022
Before you begin this article, make sure you've read [Assign share-level permissions to an identity](storage-files-identity-ad-ds-assign-permissions.md) to ensure that your share-level permissions are in place with Azure role-based access control (RBAC).
-After you assign share-level permissions, you must first connect to the Azure file share using the storage account key and then configure Windows access control lists (ACLs), also known as NTFS permissions, at the root, directory, or file level. While share-level permissions act as a high-level gatekeeper that determines whether a user can access the share, Windows ACLs operate at a more granular level to control what operations the user can do at the directory or file level.
+After you assign share-level permissions, you can configure Windows access control lists (ACLs), also known as NTFS permissions, at the root, directory, or file level. While share-level permissions act as a high-level gatekeeper that determines whether a user can access the share, Windows ACLs operate at a more granular level to control what operations the user can do at the directory or file level.
Both share-level and file/directory-level permissions are enforced when a user attempts to access a file/directory, so if the two differ, only the most restrictive one is applied. For example, if a user has read/write access at the file level, but only read at the share level, then they can only read that file. The same is true if it's reversed: if a user has read/write access at the share level, but only read at the file level, they can still only read the file.
net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> /use
## Configure Windows ACLs
-After you've connected to your Azure file share using the storage account key, you must configure the Windows ACLs. You can do this using either [icacls](#configure-windows-acls-with-icacls) or [Windows File Explorer](#configure-windows-acls-with-windows-file-explorer). You can also use the [Set-ACL](/powershell/module/microsoft.powershell.security/set-acl) PowerShell command.
+After you've connected to your Azure file share using the storage account key, you can configure the Windows ACLs. You can do this using either [icacls](#configure-windows-acls-with-icacls) or [Windows File Explorer](#configure-windows-acls-with-windows-file-explorer). You can also use the [Set-ACL](/powershell/module/microsoft.powershell.security/set-acl) PowerShell command.
If you have directories or files in on-premises file servers with Windows ACLs configured against the AD DS identities, you can copy them over to Azure Files persisting the ACLs with traditional file copy tools like Robocopy or [Azure AzCopy v 10.4+](https://github.com/Azure/azure-storage-azcopy/releases). If your directories and files are tiered to Azure Files through Azure File Sync, your ACLs are carried over and persisted in their native format.
storage Storage Files Identity Auth Azure Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-azure-active-directory-enable.md
To set share-level permissions, follow the instructions in [Assign share-level p
## Configure directory and file-level permissions
-Once your share-level permissions are in place, you must assign directory/file-level permissions to the user or group. **This requires using a device with line-of-sight to an on-premises AD**. To use Windows File Explorer, the device also needs to be domain-joined.
+Once share-level permissions are in place, you can assign directory/file-level permissions to the user or group. **This requires using a device with line-of-sight to an on-premises AD**. To use Windows File Explorer, the device also needs to be domain-joined.
There are two options for configuring directory and file-level permissions with Azure AD Kerberos authentication:
stream-analytics App Insights Export Sql Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/app-insights-export-sql-stream-analytics.md
CREATE CLUSTERED INDEX [pvTblIdx] ON [dbo].[PageViewsTable]
![Screenshot of create PageViewsTable in SQL Server Management Studio.](./media/app-insights-export-sql-stream-analytics/34-create-table.png)
-In this sample, we are using data from page views. To see the other data available, inspect your JSON output, and see the [export data model](../azure-monitor/app/export-data-model.md).
+In this sample, we are using data from page views. To see the other data available, inspect your JSON output, and see the [export data model](../azure-monitor/app/export-telemetry.md#application-insights-export-data-model).
## Create an Azure Stream Analytics instance From the [Azure portal](https://portal.azure.com/), select the Azure Stream Analytics service, and create a new Stream Analytics job:
In this example:
* `webapplication27` is the name of the Application Insights resource, **all in lower case**. * `1234...` is the instrumentation key of the Application Insights resource **with dashes removed**.
-* `PageViews` is the type of data we want to analyze. The available types depend on the filter you set in Continuous Export. Examine the exported data to see the other available types, and see the [export data model](../azure-monitor/app/export-data-model.md).
+* `PageViews` is the type of data we want to analyze. The available types depend on the filter you set in Continuous Export. Examine the exported data to see the other available types, and see the [export data model](../azure-monitor/app/export-telemetry.md#application-insights-export-data-model).
* `/{date}/{time}` is a pattern written literally. To get the name and iKey of your Application Insights resource, open Essentials on its overview page, or open Settings.
Replace the default query with:
```
-Notice that the first few properties are specific to page view data. Exports of other telemetry types will have different properties. See the [detailed data model reference for the property types and values.](../azure-monitor/app/export-data-model.md)
+Notice that the first few properties are specific to page view data. Exports of other telemetry types will have different properties. See the [detailed data model reference for the property types and values.](../azure-monitor/app/export-telemetry.md#application-insights-export-data-model)
## Set up output to database Select SQL as the output.
FROM [dbo].[PageViewsTable]
``` ## Next steps
-* [detailed data model reference for the property types and values.](../azure-monitor/app/export-data-model.md)
+* [detailed data model reference for the property types and values.](../azure-monitor/app/export-telemetry.md#application-insights-export-data-model)
* [Continuous Export in Application Insights](../azure-monitor/app/export-telemetry.md) <!--Link references-->
stream-analytics App Insights Export Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/app-insights-export-stream-analytics.md
In this example:
* `webapplication27` is the name of the Application Insights resource **all lower case**. * `1234...` is the instrumentation key of the Application Insights resource, **omitting dashes**.
-* `PageViews` is the type of data you want to analyze. The available types depend on the filter you set in Continuous Export. Examine the exported data to see the other available types, and see the [export data model](../azure-monitor/app/export-data-model.md).
+* `PageViews` is the type of data you want to analyze. The available types depend on the filter you set in Continuous Export. Examine the exported data to see the other available types, and see the [export data model](../azure-monitor/app/export-telemetry.md#application-insights-export-data-model).
* `/{date}/{time}` is a pattern written literally. > [!NOTE]
Now you can use this dataset in reports and dashboards in [Power BI](https://pow
## Next steps * [Continuous export](../azure-monitor/app/export-telemetry.md)
-* [Detailed data model reference for the property types and values.](../azure-monitor/app/export-data-model.md)
+* [Detailed data model reference for the property types and values.](../azure-monitor/app/export-telemetry.md#application-insights-export-data-model)
* [Application Insights](../azure-monitor/app/app-insights-overview.md)
synapse-analytics Sql Data Warehouse Manage Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor.md
ORDER BY sr.request_id;
``` > [!NOTE]
-> Data Movement uses a hidden database called `QTABLE`. When that database is filled, the query will also return an error message about `tempdb` being out of space. Details about `QTABLE` are not returned in the above query.
+> Data Movement uses `tempdb`. To reduce the usage of `tempdb` during data movement, ensure that your table uses a distribution strategy that [distributes data evenly](sql-data-warehouse-tables-distribute.md#choose-a-distribution-column-with-data-that-distributes-evenly).
+> Use [Azure Synapse SQL Distribution Advisor](../sql/distribution-advisor.md) to get recommendations on the distribution method suited for your workloads.
+> Use the [Azure Synapse Toolkit](https://github.com/microsoft/Azure_Synapse_Toolbox/tree/master/TSQL_Queries/TempDB) to monitor `tempdb` using T-SQL queries.
If you have a query that is consuming a large amount of memory or have received an error message related to the allocation of `tempdb`, it could be due to a very large [CREATE TABLE AS SELECT (CTAS)](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse) or [INSERT SELECT](/sql/t-sql/statements/insert-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) statement running that is failing in the final data movement operation. This can usually be identified as a ShuffleMove operation in the distributed query plan right before the final INSERT SELECT. Use [sys.dm_pdw_request_steps](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-request-steps-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) to monitor ShuffleMove operations.
update-center Assessment Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/assessment-options.md
Update management center (preview) provides you the flexibility to assess the st
## Periodic assessment
- Periodic assessment is an update setting on a machine that allows you to enable automatic periodic checking of updates by update management center (preview). We recommend that you enable this property on your machines as it allows update management center (preview) to fetch latest updates for your machines every 24 hours and enables you to view the latest compliance status of your machines. You must register this [feature in your Azure subscription](enable-machines.md#periodic-assessment). You can enable this setting using update settings flow as detailed [here](manage-update-settings.md#configure-settings-on-single-vm) or enable it at scale by using [Policy](periodic-assessment-at-scale.md).
+ Periodic assessment is an update setting on a machine that enables automatic periodic checking of updates by update management center (preview). We recommend that you enable this property on your machines, as it allows update management center (preview) to fetch the latest updates for your machines every 24 hours and lets you view the latest compliance status of your machines. You can enable this setting by using the update settings flow as detailed [here](manage-update-settings.md#configure-settings-on-single-vm), or enable it at scale by using [Policy](periodic-assessment-at-scale.md).
:::image type="content" source="media/updates-maintenance/periodic-assessment-inline.png" alt-text="Screenshot showing periodic assessment option." lightbox="media/updates-maintenance/periodic-assessment-expanded.png":::
update-center Enable Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/enable-machines.md
- Title: Enable update management center (preview) for periodic assessment and scheduled patching
-description: This article describes how to enable the periodic assessment and scheduled patching features using update management center (preview) for Windows and Linux machines running on Azure or outside of Azure connected to Azure Arc-enabled servers.
--- Previously updated : 04/21/2022---
-# How to enable update management center (preview)
-
-**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-
-This article describes how to enable update management center (preview) for periodic assessment and scheduled patching using one of the following methods:
--- From the Azure portal-- Using Azure PowerShell-- Using the Azure CLI-- Using the Azure REST API-
-Register the periodic assessment and scheduled patching feature resource providers in your Azure subscription, as detailed below, to enable update management center (preview) functionality. After your register for the features, access the preview link: **https://aka.ms/umc-preview**.
-
-## Prerequisites
--- Azure subscription - if you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).--- Your account must be a member of the Azure [Owner](../role-based-access-control/built-in-roles.md#owner) or [Contributor](../role-based-access-control/built-in-roles.md#contributor) role in the subscription.--- One or more [Azure virtual machines](../virtual-machines/index.yml), or physical or virtual machines managed by [Arc-enabled servers](../azure-arc/servers/overview.md).--- Ensure that you meet all [prerequisites for update management center](overview.md#prerequisites)-
-## Periodic assessment
-
-The following section describes how to enable periodic assessment feature for your subscription using Azure portal, PowerShell, CLI and REST API:
-
-### [Azure portal](#tab/portal-periodic)
-
-**For Arc-enabled servers**, no onboarding is required for using periodic assessment feature.
-
-**For Azure machines**, your subscription needs to be allowlisted for preview feature **InGuestAutoAssessmentVMPreview**.
-
-Follow the steps below to register for the *InGuestAutoAssessmentVMPreview* feature:
-
-1. Sign in to the Update management center (preview) portal link: **https://aka.ms/umc-preview**.
-
-1. In the Azure portal menu, search for **Preview features** and select it from the available options.
-
-1. In the **Preview features** page, search for **InGuestAutoAssessmentVMPreview**.
-
-1. Select **Virtual Machine Guest Automatic Patch Assessment Preview** from the list.
-
-1. In the **Virtual Machine Guest Automatic Patch Assessment Preview** pane, select **Register** to register the provider with your subscription.
-
-After your register for the above feature, go to update management center (preview) portal link: **https://aka.ms/umc-preview**.
--
-### [PowerShell](#tab/ps-periodic)
-
-**Arc-enabled servers** - No onboarding is required to use periodic assessment feature.
-
-**Azure VMs**
-For Azure VMs, to register the resource provider, use:
-
-```azurepowershell
-Register-AzProviderPreviewFeature -Name InGuestAutoAssessmentVMPreview -ProviderNamespace Microsoft.Compute
-```
-
-### [CLI](#tab/cli-periodic)
-
-To enable periodic assessment feature in Azure for your subscription use the Azure CLI [az feature register](/cli/azure/feature#az_feature_register) command.
-
-**Arc-enabled servers** - No onboarding is required for using Periodic assessment feature.
-
-**Azure machines** - To register the resource provider, use:
-
-```azurecli
-az feature register --namespace Microsoft.Compute --name InGuestAutoAssessmentVMPreview
-```
-
-### [REST API](#tab/rest-periodic-assessment)
-
-To enable periodic assessment feature in Azure for your subscription use the [Azure REST API](/rest/api/azure).
-
->[!NOTE]
-> This option is only applicable to Azure VMs.
-
-To register a resource provider, use:
-
-```rest
-POST on `/subscriptions/subscriptionId/providers/Microsoft.Features/providers/Microsoft.Compute/features/InGuestAutoAssessmentVMPreview/register?api-version=2015-12-01`
-```
-
-Replace the value `subscriptionId` with the ID of the target subscription.
---
->[!NOTE]
-> This preview feature will be auto-approved.
--
-## Next steps
-
-* [View updates for single machine](view-updates.md)
-* [Deploy updates now (on-demand) for single machine](deploy-updates.md)
-* [Schedule recurring updates](scheduled-patching.md)
-* [Manage update settings via Portal](manage-update-settings.md)
-* [Manage multiple machines using update management center](manage-multiple-machines.md)
update-center Manage Update Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-update-settings.md
To configure update settings on your machines on a single VM, follow these steps
The following update settings are available for configuration for the selected machine(s): - **Periodic assessment** - enable periodic **Assessment** to run every 24 hours.
- >[!NOTE]
- > You must [register for the periodic assessement](./enable-machines.md?branch=release-updatecenterv2-publicpreview&tabs=portal-periodic%2cps-periodic-assessment%2ccli-periodic-assessment%2crest-periodic-assessment) in your Azure subscription to enable this feature.
-
+
- **Hot patching** - for Azure VMs, you can enable [hot patching](../automanage/automanage-hotpatch.md) on supported Windows Server Azure Edition Virtual Machines (VMs) that don't require a reboot after installation. You can use update management center (preview) to install patches with other patch classifications or to schedule patch installation when you require immediate critical patch deployment. - **Patch orchestration** option provides the following:
To configure update settings on your machines on a single VM, follow these steps
# [From a selected VM](#tab/singlevm-schedule-home)
->[!NOTE]
-> **For Azure machines**, your subscription needs to be allowlisted for preview feature. For more information, see
-[On-boarding preview features](enable-machines.md)
- 1. Select your virtual machine and the **virtual machines | Updates** page opens. 1. Under **Operations**, select **Updates**. 1. In **Updates**, select **Go to Updates using Update Center**.
update-center Periodic Assessment At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/periodic-assessment-at-scale.md
This article describes how to enable Periodic Assessment for your machines at scale using Azure Policy. Periodic Assessment is a setting on your machine that enables you to see the latest updates available for your machines and removes the hassle of performing assessment manually every time you need to check the update status. Once you enable this setting, update management center (preview) fetches updates on your machine once every 24 hours.
->[!NOTE]
-> You must [register for the periodic assessement](./enable-machines.md?branch=release-updatecenterv2-publicpreview&tabs=portal-periodic%2cps-periodic-assessment%2ccli-periodic-assessment%2crest-periodic-assessment) in your Azure subscription to enable this feature.
## Enable Periodic assessment for your Azure machines using Policy 1. Go to **Policy** from the Azure portal and under **Authoring**, go to **Definitions**.
virtual-desktop Azure Stack Hci Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-stack-hci-overview.md
The following issues affect the preview version of Azure Virtual Desktop for Azu
- Azure Stack HCI host pools don't currently support the following Azure Virtual Desktop features:
- - [Azure Monitor for Azure Virtual Desktop](azure-monitor.md)
+ - [Azure Virtual Desktop Insights](insights.md)
- [Session host scaling with Azure Automation](set-up-scaling-script.md) - [Autoscale plan](autoscale-scaling-plan.md) - [Start VM On Connect](start-virtual-machine-connect.md)
virtual-desktop Compare Remote Desktop Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/compare-remote-desktop-clients.md
Title: Compare the features of the Remote Desktop clients for Azure Virtual Desk
description: Compare the features of the Remote Desktop clients when connecting to Azure Virtual Desktop. Previously updated : 09/26/2022 Last updated : 11/22/2022
When you enable USB port redirection, all USB devices attached to USB ports are
| Redirection | Windows Desktop | Microsoft Store client | Android or Chrome OS | iOS or iPadOS | macOS | Web client | |--|--|--|--|--|--|--|
-| Cameras | X | | | X | X | |
+| Cameras | X | | X | X | X | |
| Clipboard | X | X | Text | Text, images | X | Text | | Local drive/storage | X | | X | X | X | X\* |
-| Location | X | | | | | |
+| Location | X (Windows 11 only) | | | | | |
| Microphones | X | X | X | X | X | X | | Printers | X | | | | X\*\* (CUPS only) | PDF print | | Scanners | X | | | | | |
virtual-desktop Connection Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/connection-latency.md
Azure Virtual Desktop uses [Azure Front Door](https://azure.microsoft.com/servic
- To get started with your Azure Virtual Desktop deployment, check out [our tutorial](./create-host-pools-azure-marketplace.md). - To learn about bandwidth requirements for Azure Virtual Desktop, see [Understanding Remote Desktop Protocol (RDP) Bandwidth Requirements for Azure Virtual Desktop](rdp-bandwidth.md). - To learn about Azure Virtual Desktop network connectivity, see [Understanding Azure Virtual Desktop network connectivity](network-connectivity.md).-- Learn how to use Azure Monitor at [Get started with Azure Monitor for Azure Virtual Desktop](azure-monitor.md).
+- Learn how to use Azure Virtual Desktop Insights at [Get started with Azure Virtual Desktop Insights](insights.md).
virtual-desktop Diagnostics Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/diagnostics-log-analytics.md
Connections that don't reach Azure Virtual Desktop won't show up in diagnostics
Azure Monitor lets you analyze Azure Virtual Desktop data and review virtual machine (VM) performance counters, all within the same tool. This article will tell you more about how to enable diagnostics for your Azure Virtual Desktop environment. >[!NOTE]
->To learn how to monitor your VMs in Azure, see [Monitoring Azure virtual machines with Azure Monitor](../azure-monitor/vm/monitor-vm-azure.md). Also, make sure to review the [Azure Monitor glossary](./azure-monitor-glossary.md) for a better understanding of your user experience on the session host.
+>To learn how to monitor your VMs in Azure, see [Monitoring Azure virtual machines with Azure Monitor](../azure-monitor/vm/monitor-vm-azure.md). Also, make sure to review the [Azure Virtual Desktop Insights glossary](./insights-glossary.md) for a better understanding of your user experience on the session host.
## Before you get started
virtual-desktop Insights Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights-costs.md
+
+ Title: Estimate Azure Virtual Desktop monitoring costs - Azure
+description: How to estimate costs and pricing for using Azure Virtual Desktop Insights.
++ Last updated : 03/29/2021++++
+# Estimate Azure Virtual Desktop monitoring costs
+
+Azure Virtual Desktop uses the Azure Monitor Logs service to collect, index, and store data generated by your environment. Because of this, the Azure Monitor pricing model is based on the amount of data that's brought into and processed (or "ingested") by your Log Analytics workspace in gigabytes per day. The cost of a Log Analytics workspace isn't based only on the volume of data collected, but also on which Azure payment plan you've selected and how long you choose to store the data your environment generates.
+
+This article will explain the following things to help you understand how pricing in Azure Monitor works:
+
+- How to estimate data ingestion and storage costs upfront before you enable this feature
+- How to measure and control your ingestion and storage to reduce costs when using this feature
+
+>[!NOTE]
+> All sizes and pricing listed in this article are just examples to demonstrate how estimation works. For a more accurate assessment based on your Azure Monitor Log Analytics pricing model and Azure region, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+
+## Estimate data ingestion and storage costs
+
+We recommend you use a predefined set of data written as logs in your Log Analytics workspace. In the following example estimates, we'll look at billable data in the default configuration.
+
+The predefined datasets for Azure Virtual Desktop Insights include:
+
+- Performance counters from the session hosts
+- Windows Event Logs from the session hosts
+- Azure Virtual Desktop diagnostics from the service infrastructure
+
+Your data ingestion and storage costs depend on your environment size, health, and usage. The example estimates in this article are based on healthy virtual machines running light to power usage, following our [virtual machine sizing guidelines](/windows-server/remote/remote-desktop-services/virtual-machine-recs), and give a range of data ingestion and storage costs you could expect.
+
+The light usage VM we'll be using in our example includes the following components:
+
+- 4 vCPUs, 1 disk
+- 16 sessions per day
+- An average session duration of 2 hours (120 minutes)
+- 100 processes per session
+
+The power usage VM we'll be using in our example includes the following components:
+
+- 6 vCPUs, 1 disk
+- 6 sessions per day
+- Average session duration of 4 hours (240 minutes)
+- 200 processes per session
+
+## Estimating performance counter ingestion
+
+Performance counters show how the system resources are performing. Performance counter data ingestion depends on your environment size and usage. In most cases, performance counters should make up 80 to 99% of your data ingestion for Azure Virtual Desktop Insights.
+
+Before you start estimating, it's important that you understand that each performance counter sends data at a specific frequency. We set a default sample rate per minute (you can also edit this rate in your settings), but that rate will be applied at different multiplying factors depending on the counter. The following factors affect the rate:
+
+- For the per virtual machine (VM) factor, each counter sends data per VM in your environment at the default sample rate per minute while the VM is running. You can estimate the number of records these counters send per day by multiplying the default sample rate per minute by the number of VMs in your environment, then multiplying that number by the average VM running time per day.
+
+ To summarize:
+
+ Default sample rate per minute × number of VMs × average VM running time per day = number of records sent per day
+
+- For the per CPU factor, each counter sends at the default sample rate per minute per vCPU in each VM in your environment while the VM is running. You can estimate the number of records the counters will send per day by multiplying the default sample rate per minute by the number of CPU cores in the VM SKU, then multiplying that number by the number of minutes the VM runs and the number of VMs in your environment.
+
+ To summarize:
+
+ Default sample rate per minute × number of CPU cores in the VM SKU × number of minutes the VM runs × number of VMs = number of records sent per day
+
+- For the per disk factor, each counter sends data at the default sample rate for each disk in each VM in your environment. The number of records these counters will send per day equals the default sample rate per minute multiplied by the number of disks in the VM SKU, multiplied by 60 minutes per hour, multiplied by the number of VMs, and finally multiplied by the average active hours for a VM.
+
+ To summarize:
+
+ Default sample rate per minute × number of disks in VM SKU × 60 minutes per hour × number of VMs × average VM running time per day = number of records sent per day
+
+- For the per session factor, each counter sends data at the default sample rate for each session in your environment while the session is connected. You can estimate the number of records these counters will send per day by multiplying the default sample rate per minute by the average number of sessions per day and the average session duration.
+
+ To summarize:
+
+ Default sample rate per minute × sessions per day × average session duration = number of records sent per day
+
+- For the per-process factor, each counter sends data at the default rate for each process in each session in your environment. You can estimate the number of records these counters will send per day by multiplying the default sample rate per minute by the average number of sessions per day, then multiplying that by the average session duration and the average number of processes per session.
+
+ To summarize:
+
+ Default sample rate per minute × sessions per day × average session duration × average number of processes per session = number of records sent per day
+
+The following table lists the 20 performance counters Azure Virtual Desktop Insights collects and their default rates:
+
+| Counter name | Default sample rate | Frequency factor |
+|--|||
+| Logical Disk(C:)\\% free space | 60 seconds | Per disk |
+| Logical Disk(C:)\\Avg. Disk Queue Length | 30 seconds | Per disk |
+| Logical Disk(C:)\\Avg. Disk sec/Transfer | 60 seconds | Per disk |
+| Logical Disk(C:)\\Current Disk Queue Length | 30 seconds | Per disk |
+| Memory(\*)\\Available Mbytes | 30 seconds | Per VM |
+| Memory(\*)\\Page Faults/sec | 30 seconds | Per VM |
+| Memory(\*)\\Pages/sec | 30 seconds | Per VM |
+| Memory(\*)\\% Committed Bytes in Use | 30 seconds | Per VM |
+| PhysicalDisk(\*)\\Avg. Disk Queue Length | 30 seconds | Per disk |
+| PhysicalDisk(\*)\\Avg. Disk sec/Read | 30 seconds | Per disk |
+| PhysicalDisk(\*)\\Avg. Disk sec/Transfer | 30 seconds | Per disk |
+| PhysicalDisk(\*)\\Avg. Disk sec/Write | 30 seconds | Per disk |
+| Processor Information(_Total)\\% Processor Time | 30 seconds | Per core/CPU |
+| Terminal Services(\*)\\Active Sessions | 60 seconds | Per VM |
+| Terminal Services(\*)\\Inactive Sessions | 60 seconds | Per VM |
+| Terminal Services(\*)\\Total Sessions | 60 seconds | Per VM |
+| User Input Delay per Process(\*)\\Max Input Delay | 30 seconds | Per process |
+| User Input Delay per Session(\*)\\Max Input Delay | 30 seconds | Per session |
+| RemoteFX Network(\*)\\Current TCP RTT | 30 seconds | Per VM |
+| RemoteFX Network(\*)\\Current UDP Bandwidth | 30 seconds | Per VM |
+
+If we estimate each record size to be 200 bytes, an example VM running a light workload on the default sample rate would send roughly 90 megabytes of performance counter data per day per VM. Meanwhile, an example VM running a power workload would send roughly 130 megabytes of performance counter data per day per VM. However, record size and environment usage can vary, so the megabytes per day your deployment uses may be different.
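+
+To see how these factors combine, the following sketch (runnable in any Log Analytics query window) applies the per-process formula to the light-usage example VM defined earlier, using the same 200-byte record size assumption:
+
+```kusto
+// Per-process counter for the light-usage example VM:
+// 30-second sample rate = 2 samples per minute, 16 sessions per day,
+// 120-minute average session, 100 processes per session, ~200 bytes per record
+print RecordsPerDay = 2 * 16 * 120 * 100,
+      EstimatedMBPerDay = (2.0 * 16 * 120 * 100 * 200) / (1024 * 1024)
+```
+
+Under these assumptions, this single per-process counter accounts for roughly 73 of the approximately 90 megabytes per day estimated for the light workload.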
+
+To learn more about input delay performance counters, see [User Input Delay performance counters](/windows-server/remote/remote-desktop-services/rds-rdsh-performance-counters/).
+
+## Estimating Windows Event Log ingestion
+
+Windows Event Logs are data sources collected by Log Analytics agents on Windows virtual machines. You can collect events from standard logs like System and Application as well as custom logs created by applications you need to monitor.
+
+These are the default Windows Events for Azure Virtual Desktop Insights:
+
+- Application
+- Microsoft-Windows-TerminalServices-RemoteConnectionManager/Admin
+- Microsoft-Windows-TerminalServices-LocalSessionManager/Operational
+- System
+- Microsoft-FSLogix-Apps/Operational
+- Microsoft-FSLogix-Apps/Admin
+
+Windows Events are sent whenever the conditions of the event are met in the environment. Machines in healthy states send fewer events than machines in unhealthy states. Since event count is unpredictable, this estimate uses a range of 1,000 to 10,000 events per VM per day based on examples from healthy environments. If we estimate each event record size to be 1,500 bytes, this comes out to roughly 2 to 15 megabytes of event data per day for the specified environment.
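+
+If session hosts are already reporting to your workspace, you can check your actual event volume with a query like the following sketch against the standard `Event` table (`_BilledSize` is the per-record billed size that Azure Monitor exposes, as used in the performance counter query later in this article):
+
+```kusto
+// Billable Windows event volume per event log over the last day
+Event
+| where TimeGenerated > ago(1d)
+| summarize Records = count(), Billed_MBytes = sum(_BilledSize) / (1024.0 * 1024.0) by EventLog
+| sort by Billed_MBytes desc
+```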
+
+To learn more about Windows events, see [Windows event records properties](../azure-monitor/agents/data-sources-windows-events.md).
+
+## Estimating diagnostics ingestion
+
+The diagnostics service creates activity logs for both user and administrative actions.
+
+These are the names of the activity logs the diagnostic counter tracks:
+
+- WVDCheckpoints
+- WVDConnections
+- WVDErrors
+- WVDFeeds
+- WVDManagement
+- WVDAgentHealthStatus
+
+The service sends diagnostic information whenever the environment meets the terms required to make a record. Since diagnostic record count is unpredictable, we use a range of 500 to 1000 events per VM per day based on examples from healthy environments for this estimate.
+
+If we estimate each diagnostic record size to be 200 bytes, the total ingested data would be less than 1 MB per VM per day.
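+
+You can also measure what the diagnostics tables actually ingest. The following sketch unions the activity log tables listed above (using `isfuzzy=true` so it still runs if a table doesn't exist in your workspace yet) and reports billable volume per table over the last day:
+
+```kusto
+// Billable Azure Virtual Desktop diagnostics volume per table over the last day
+union isfuzzy=true WVDCheckpoints, WVDConnections, WVDErrors, WVDFeeds, WVDManagement, WVDAgentHealthStatus
+| where TimeGenerated > ago(1d)
+| summarize Records = count(), Billed_MBytes = sum(_BilledSize) / (1024.0 * 1024.0) by Type
+| sort by Billed_MBytes desc
+```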
+
+To learn more about the activity log categories, see [Azure Virtual Desktop diagnostics](diagnostics-log-analytics.md).
+
+## Estimating total costs
+
+Finally, let's estimate the total cost. In this example, let's say we come up with the following results based on the example values in the previous sections:
+
+| Data source | Size estimate per day (in megabytes) |
+|-||
+| Performance counters | 90-130 |
+| Events | 2-15 |
+| Azure Virtual Desktop diagnostics | \< 1 |
+
+In this example, the total ingested data for Azure Virtual Desktop Insights is between 92 to 145 megabytes per VM per day. In other words, every 31 days, each VM ingests roughly 3 to 5 gigabytes of data.
+
+Using the default Pay-as-you-go model for [Log Analytics pricing](https://azure.microsoft.com/pricing/details/monitor/), you can estimate the Azure Monitor data collection and storage cost per month. Depending on your data ingestion, you may also consider the Capacity Reservation model for Log Analytics pricing.
+
+## Manage your data ingestion to reduce costs
+
+This section will explain how to measure and manage data ingestion to reduce costs.
+
+To learn about managing rights and permissions to the workbook, see [Access control](../azure-monitor/visualize/workbooks-overview.md#access-control).
+
+>[!NOTE]
+>Removing data points will impact their corresponding visuals in Azure Virtual Desktop Insights.
+
+### Log Analytics settings
+
+Here are some suggestions to optimize your Log Analytics settings to manage data ingestion:
+
+- Use a designated Log Analytics workspace for your Azure Virtual Desktop resources to ensure that Log Analytics only collects performance counters and events for the virtual machines in your Azure Virtual Desktop deployment.
+- Adjust your Log Analytics storage settings to manage costs. You can reduce the retention period, evaluate whether a fixed storage pricing tier would be more cost-effective, or set boundaries on how much data you can ingest to limit impact of an unhealthy deployment. To learn more, see [Azure Monitor Logs pricing details](../azure-monitor/logs/cost-logs.md).
+
+### Remove excess data
+
+Our default configuration is the only set of data we recommend for Azure Virtual Desktop Insights. You always have the option to add additional data points and view them in the Host Diagnostics: Host browser or build custom charts for them; however, added data will increase your Log Analytics cost. You can remove these extra data points for cost savings.
+
+### Measure and manage your performance counter data
+
+Your true monitoring costs will depend on your environment size, usage, and health. To understand how to measure data ingestion in your Log Analytics workspace, see [Analyze usage in Log Analytics workspace](../azure-monitor/logs/analyze-usage.md).
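+
+For a workspace-level starting point, the following sketch uses the standard `Usage` table to show billable ingestion by data type over the last 31 days (the `Quantity` column is reported in megabytes):
+
+```kusto
+// Billable ingestion by data type over the last 31 days
+Usage
+| where TimeGenerated > ago(31d)
+| where IsBillable == true
+| summarize IngestedGB = sum(Quantity) / 1024.0 by DataType
+| sort by IngestedGB desc
+```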
+
+The performance counters the session hosts use will probably be your largest source of ingested data for Azure Virtual Desktop Insights. The following custom query template for a Log Analytics workspace can track frequency and megabytes ingested per performance counter over the last day:
+
+```azure
+let WVDHosts = dynamic(['Host1.MyCompany.com', 'Host2.MyCompany.com']);
+Perf
+| where TimeGenerated > ago(1d)
+| where Computer in (WVDHosts)
+| extend PerfCounter = strcat(ObjectName, ":", CounterName)
+| summarize Records = count(TimeGenerated), InstanceNames = dcount(InstanceName), Bytes=sum(_BilledSize) by PerfCounter
+| extend Billed_MBytes = Bytes / (1024 * 1024), BytesPerRecord = Bytes / Records
+| sort by Records desc
+```
+
+>[!NOTE]
+>Make sure to replace the template's placeholder values with the values your environment uses, otherwise the query won't work.
+
+This query will show all performance counters you have enabled on the environment, not just the default ones for Azure Virtual Desktop Insights. This information can help you understand which areas to target to reduce costs, like reducing a counter's frequency or removing it altogether.
+
+You can also reduce costs by removing performance counters. To learn how to remove performance counters or edit existing counters to reduce their frequency, see [Configuring performance counters](../azure-monitor/agents/data-sources-performance-counters.md#configuring-performance-counters).
+
+### Manage Windows Event Logs
+
+Windows Events are unlikely to cause a spike in data ingestion when all hosts are healthy. An unhealthy host can increase the number of events sent to the log, but the information can be critical to fixing the host's issues. We recommend keeping them. To learn more about how to manage Windows Event Logs, see [Configuring Windows Event logs](../azure-monitor/agents/data-sources-windows-events.md#configure-windows-event-logs).
+
+### Manage diagnostics
+
+Azure Virtual Desktop diagnostics should make up less than 1% of your data storage costs, so we don't recommend removing them. To manage Azure Virtual Desktop diagnostics, [Use Log Analytics for the diagnostics feature](diagnostics-log-analytics.md).
+
+## Next steps
+
+Learn more about Azure Virtual Desktop Insights at these articles:
+
+- [Use Azure Virtual Desktop Insights to monitor your deployment](insights.md).
+- Use the [glossary](insights-glossary.md) to learn more about terms and concepts.
+- If you encounter a problem, check out our [troubleshooting guide](troubleshoot-insights.md) for help.
+- Check out [Monitoring usage and estimated costs in Azure Monitor](../azure-monitor/usage-estimated-costs.md) to learn more about managing your monitoring costs.
virtual-desktop Insights Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights-glossary.md
+
+ Title: Monitor Azure Virtual Desktop glossary - Azure
+description: A glossary of terms and concepts related to Azure Virtual Desktop Insights.
+++++ Last updated : 10/26/2022+++
+# Azure Virtual Desktop Insights glossary
+
+This article lists and briefly describes key terms and concepts related to Azure Virtual Desktop Insights.
+
+## Alerts
+
+Any active Azure Monitor alerts that you've configured on the subscription and classified as [severity 0](#severity-0-alerts) will appear in the Overview page. To learn how to set up alerts, see [Azure Monitor Log Alerts](../azure-monitor/alerts/alerts-log.md).
+
+## Available sessions
+
+Available sessions shows the number of available sessions in the host pool. The service calculates this number by multiplying the number of virtual machines (VMs) by the maximum number of sessions allowed per virtual machine, then subtracting the total sessions.
+
+## Client operating system (OS)
+
+The client operating system (OS) shows which version of the OS end-users accessing Azure Virtual Desktop resources are currently using. The client OS also shows which version of the web (HTML) client and the full Remote Desktop client the users have. For a full list of Windows OS versions, see [Operating System Version](/windows/win32/sysinfo/operating-system-version).
+
+>[!IMPORTANT]
+>Windows 7 support will end on January 10, 2023. The client OS version for Windows 7 is Windows 6.1.
+
+## Connection success
+
+This item shows connection health. "Connection success" means that the connection could reach the host, as confirmed by the stack on that virtual machine. A failed connection means that the connection couldn't reach the host.
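+
+If you want to approximate the connection success rate yourself, here's a minimal Log Analytics sketch against the `WVDConnections` diagnostics table. It assumes a connection counts as successful when a `Connected` event shares a correlation ID with its `Started` event; the Insights calculation may differ.
+
+```
+WVDConnections
+| where TimeGenerated > ago(1d)
+| summarize Attempts = dcountif(CorrelationId, State == "Started"),
+            Successes = dcountif(CorrelationId, State == "Connected")
+| extend SuccessRatePercent = round(100.0 * Successes / Attempts, 1)
+```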
+
+## Daily active users (DAU)
+
+The total number of users that have started a session in the last 24 hours.
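+
+For example, a minimal Log Analytics sketch that counts daily active users from the `WVDConnections` diagnostics table might look like this:
+
+```
+WVDConnections
+| where TimeGenerated > ago(1d)
+| where State == "Started"
+| summarize DailyActiveUsers = dcount(UserName)
+```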
+
+## Daily alerts
+
+The total number of alerts triggered each day.
+
+## Daily connections and reconnections
+
+The total number of connections and reconnections started or completed within the last 24 hours.
+
+## Daily connected hours
+
+The total number of hours spent connected to a session across users in the last 24 hours.
+
+## Diagnostics and errors
+
+When an error or alert appears in Azure Virtual Desktop Insights, it's categorized by three things:
+
+- Activity type: this category is how the error is categorized by Azure Virtual Desktop diagnostics. The categories are management activities, feeds, connections, host registrations, errors, and checkpoints. Learn more about these categories at [Use Log Analytics for the diagnostics feature](diagnostics-log-analytics.md).
+
+- Kind: this category shows the error's location.
+
+ - Errors marked as "service" or "ServiceError = TRUE" happened in the Azure Virtual Desktop service.
+ - Errors marked as "deployment" or tagged "ServiceError = FALSE" happened outside of the Azure Virtual Desktop service.
+ - To learn more about the ServiceError tag, see [Common error scenarios](./troubleshoot-set-up-overview.md).
+
+- Source: this category gives a more specific description of where the error happened.
+
+ - Diagnostics: the service role responsible for monitoring and reporting service activity to let users observe and diagnose deployment issues.
+
+ - RDBroker: the service role responsible for orchestrating deployment activities, maintaining the state of objects, validating authentication, and more.
+
+ - RDGateway: the service role responsible for handling network connectivity between end-users and virtual machines.
+
+ - RDStack: a software component that's installed on your VMs to allow them to communicate with the Azure Virtual Desktop service.
+
+ - Client: software running on the end-user machine that provides the interface to the Azure Virtual Desktop service. It displays the list of published resources and hosts the Remote Desktop connection once you've made a selection.
+
+Each diagnostics issue or error includes a message that explains what went wrong. To learn more about troubleshooting errors, see [Identify and diagnose Azure Virtual Desktop issues](./troubleshoot-set-up-overview.md).
+
+## Input delay
+
+"Input delay" in Azure Virtual Desktop Insights means the input delay per process performance counter for each session. In the host performance page at [aka.ms/azmonwvdi](https://portal.azure.com/#blade/Microsoft_Azure_WVD/WvdManagerMenuBlade/workbooks), this performance counter is configured to send a report to the service once every 30 seconds. These 30-second intervals are called "samples," and each sample reports the worst case in that window. The median and p95 values reflect the median and 95th percentile across all samples.
+
+Under **Input delay by host**, you can select a session host row to filter all other visuals in the page to that host. You can also select a process name to filter the median input delay over time chart.
+
+We put delays in the following categories:
+
+- Good: below 150 milliseconds.
+- Acceptable: 150-500 milliseconds.
+- Poor: 500-2,000 milliseconds (below 2 seconds).
+- Bad: over 2,000 milliseconds (2 seconds and up).
+
+To learn more about how the input delay counter works, see [User Input Delay performance counters](/windows-server/remote/remote-desktop-services/rds-rdsh-performance-counters/).
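+
+If you want to reproduce the median and 95th percentile values outside of Insights, here's a minimal Log Analytics sketch that aggregates the input delay samples from the `Perf` table, assuming the **User Input Delay per Session** counter is enabled on your session hosts:
+
+```
+Perf
+| where TimeGenerated > ago(1d)
+| where ObjectName == "User Input Delay per Session" and CounterName == "Max Input Delay"
+| summarize MedianInputDelayMs = percentile(CounterValue, 50),
+            P95InputDelayMs = percentile(CounterValue, 95) by Computer
+| sort by P95InputDelayMs desc
+```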
+
+## Monthly active users (MAU)
+
+The total number of users that have started a session in the last 28 days. If you store data for 30 days or less, you may see lower-than-expected MAU and Connection values during periods where you have fewer than 28 days of data available.
+
+## Performance counters
+
+Performance counters show the performance of hardware components, operating systems, and applications.
+
+The following table lists the recommended performance counters and time intervals that Azure Monitor uses for Azure Virtual Desktop:
+
+|Performance counter name|Time interval|
+|||
+|Logical Disk(C:)\\Avg. Disk Queue Length|30 seconds|
+|Logical Disk(C:)\\Avg. Disk sec/Transfer|60 seconds|
+|Logical Disk(C:)\\Current Disk Queue Length|30 seconds|
+|Memory(\*)\\Available Mbytes|30 seconds|
+|Memory(\*)\\Page Faults/sec|30 seconds|
+|Memory(\*)\\Pages/sec|30 seconds|
+|Memory(\*)\\% Committed Bytes in Use|30 seconds|
+|PhysicalDisk(\*)\\Avg. Disk Queue Length|30 seconds|
+|PhysicalDisk(\*)\\Avg. Disk sec/Read|30 seconds|
+|PhysicalDisk(\*)\\Avg. Disk sec/Transfer|30 seconds|
+|PhysicalDisk(\*)\\Avg. Disk sec/Write|30 seconds|
+|Processor Information(_Total)\\% Processor Time|30 seconds|
+|Terminal Services(\*)\\Active Sessions|60 seconds|
+|Terminal Services(\*)\\Inactive Sessions|60 seconds|
+|Terminal Services(\*)\\Total Sessions|60 seconds|
+|\*User Input Delay per Process(\*)\\Max Input Delay|30 seconds|
+|\*User Input Delay per Session(\*)\\Max Input Delay|30 seconds|
+|RemoteFX Network(\*)\\Current TCP RTT|30 seconds|
+|RemoteFX Network(\*)\\Current UDP Bandwidth|30 seconds|
+
+To learn more about how to read performance counters, see [Configuring performance counters](../azure-monitor/agents/data-sources-performance-counters.md).
+
+To learn more about input delay performance counters, see [User Input Delay performance counters](/windows-server/remote/remote-desktop-services/rds-rdsh-performance-counters/).
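+
+To check which counters are actually arriving in your Log Analytics workspace, and how many samples each one sends, you can run a query like the following sketch against the `Perf` table:
+
+```
+Perf
+| where TimeGenerated > ago(1h)
+| summarize Samples = count(), LastSample = max(TimeGenerated) by ObjectName, CounterName
+| sort by Samples desc
+```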
+
+## Potential connectivity issues
+
+Potential connectivity issues shows the hosts, users, published resources, and clients with a high connection failure rate. Once you choose a "report by" filter, you can evaluate the issue's severity by checking the values in these columns:
+
+- Attempts (number of connection attempts)
+- Resources (number of published apps or desktops)
+- Hosts (number of VMs)
+- Clients
+
+For example, if you select the **By user** filter, you can check to see each user's connection attempts in the **Attempts** column.
+
+If you notice that a connection issue spans multiple hosts, users, resources, or clients, it's likely that the issue affects the whole system. If it doesn't, it's a smaller issue with a lower priority.
+
+You can also select entries to view additional information. You can view which hosts, resources, and client versions were involved with the issue. The display will also show any errors reported during the connection attempts.
+
+## Round-trip time (RTT)
+
+Round-trip time (RTT) is an estimate of the connection's round-trip time between the end-user's location and the session host's Azure region. To see which locations have the best latency, look up your desired location in the [Azure Virtual Desktop Experience Estimator tool](https://azure.microsoft.com/services/virtual-desktop/assessment/).
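+
+You can also look at the round-trip times measured for recent connections. Here's a minimal sketch, assuming connection network data diagnostics are being sent to your Log Analytics workspace in the `WVDConnectionNetworkData` table:
+
+```
+WVDConnectionNetworkData
+| where TimeGenerated > ago(1d)
+| summarize MedianRttMs = percentile(EstRoundTripTimeInMs, 50),
+            P95RttMs = percentile(EstRoundTripTimeInMs, 95)
+```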
+
+## Session history
+
+The **Sessions** item shows the status of all sessions, connected and disconnected. **Idle sessions** only shows the disconnected sessions.
+
+## Severity 0 alerts
+
+The most urgent items that you need to take care of right away. If you don't address these issues, they could cause your Azure Virtual Desktop deployment to stop working.
+
+## Time to connect
+
+Time to connect is the time between when a user opens a resource to start their session and when their desktop has loaded and is ready to use. For example, for RemoteApps, this is the time it takes to launch the application.
+
+Time to connect has two stages:
+
+- Connection, which is how long it takes for the Azure service to route the user to a session host.
+- "Logon," which is how long it takes for the service to perform tasks related to signing in the user and establishing the session on the session host.
+
+When monitoring time to connect, keep in mind the following things:
+
+- Time to connect is measured with the following checkpoints from Azure Virtual Desktop service data (see the query sketch after this list):
+
+ - Begins: [WVDConnection](/azure/azure-monitor/reference/tables/wvdconnections) state = started
+
+ - Ends: [WVDCheckpoints](/azure/azure-monitor/reference/tables/wvdcheckpoints) Name = ShellReady (desktops); Name = first app launch for RemoteApp (RdpShellAppExecuted)
+
+ For example, the time for a desktop experience to launch would be measured based on how long it takes to launch Windows Explorer (explorer.exe).
+
+- Establishing new sessions usually takes longer than reestablishing connections to existing sessions due to differences in the "logon" process for new and established connections.
+
+- The time it takes for the user to provide credentials is subtracted from their time to connect to account for situations where a user either takes a while to enter credentials or uses alternative authentication methods to sign in.
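+
+The following is a minimal Log Analytics sketch of how you could approximate time to connect for desktop sessions, joining the `Started` state from `WVDConnections` with the `ShellReady` checkpoint from `WVDCheckpoints` on the correlation ID. It's only an approximation; the Insights calculation (for example, subtracting credential-entry time) may differ.
+
+```
+let starts = WVDConnections
+    | where TimeGenerated > ago(1d)
+    | where State == "Started"
+    | project CorrelationId, StartTime = TimeGenerated;
+WVDCheckpoints
+| where TimeGenerated > ago(1d)
+| where Name == "ShellReady"
+| project CorrelationId, ShellReadyTime = TimeGenerated
+| join kind=inner starts on CorrelationId
+| extend TimeToConnectSeconds = datetime_diff("second", ShellReadyTime, StartTime)
+| summarize MedianSeconds = percentile(TimeToConnectSeconds, 50),
+            P95Seconds = percentile(TimeToConnectSeconds, 95)
+```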
+
+When troubleshooting a high time to connect, Azure Monitor will break down total connection time data into four components to help you identify how to reduce sign-in time.
+
+>[!NOTE]
+>The components in this section only show the primary connection stages. These components can run in parallel, which means they won't add up to equal the total time to connect. The total time to connect is a measurement that Azure Monitor determines in a separate process.
+
+The following flowchart shows the four stages of the sign-in process:
+
+
+The flowchart shows the following four components:
+
+- User route: the time it takes from when the user selects the Azure Virtual Desktop icon to launch a session to when the service identifies a host to connect to. High network load, high service load, or unique network traffic routing can lead to high routing times. To troubleshoot user route issues, look at your network paths.
+
+- Stack connected: the time it takes from when the service resolves a target session host for the user to when the service establishes a connection between the session host and the user's remote client. Like user routing, the network load, server load, or unique network traffic routing can affect connection time. For this component, you'll also need to pay attention to your network routing. To reduce connection time, make sure you've appropriately configured all proxy configurations on both the client and session hosts, and that routing to the service is optimal.
+
+- Logon: the time it takes between when a connection to a host is established to when the shell starts to load. Logon time includes several processes that can contribute to high connection times. You can view data for the "logon" stage in Insights to see if there are unexpected peaks in average times.
+
+ The "logon" process is divided into four stages:
+
+ - Profiles: the time it takes to load a user's profile for new sessions. How long loading takes depends on user profile size or the user profile solutions you're using (such as User Experience Virtualization). If you're using a solution that depends on network-stored profiles, excess latency can also lead to longer profile loading times.
+
+ - Group Policy Objects (GPOs): the time it takes to apply group policies to new sessions. A spike in this area of the data is a sign that you have too many group policies, the policies take too long to apply, or the session host is experiencing resource issues. One thing you can do to optimize processing times is to make sure the domain controller is as close to the session hosts as possible.
+
+ - Shell Start: the time it takes to launch the shell (usually explorer.exe).
+
+ - FSLogix (Frxsvc): the time it takes to launch FSLogix in new sessions. A long launch time may indicate issues with the shares used to host the FSLogix user profiles. To troubleshoot these issues, make sure the shares are collocated with the session hosts and appropriately scaled for the average number of users signing in to the hosts. Another area you should look at is profile size. Large profile sizes can slow down launch times.
+
+- Shell start to shell ready: the time from when the shell starts to load to when it's fully loaded and ready for use. Delays in this phase can be caused by session host overload (high CPU, memory, or disk activity) or configuration issues.
+
+## User report
+
+The user report page lets you view a specific user's connection history and diagnostic information. Each user report shows usage patterns, user feedback, and any errors users have encountered during their sessions. Most smaller issues can be resolved with user feedback. If you need to dig deeper, you can also filter information about a specific connection ID or period of time.
+
+## Users per core
+
+This is the number of users in each virtual machine core. Tracking the maximum number of users per core over time can help you identify whether the environment consistently runs at a high, low, or fluctuating number of users per core. Knowing how many users are active will help you efficiently resource and scale the environment.
+
+## Windows Event Logs
+
+Windows Event Logs are data sources collected by Log Analytics agents on Windows virtual machines. You can collect events from standard logs like System and Application as well as custom logs created by applications you need to monitor.
+
+The following table lists the required Windows Event Logs for Azure Virtual Desktop Insights:
+
+|Event name|Event type|
+|||
+|Application|Error and Warning|
+|Microsoft-Windows-TerminalServices-RemoteConnectionManager/Admin|Error, Warning, and Information|
+|Microsoft-Windows-TerminalServices-LocalSessionManager/Operational|Error, Warning, and Information|
+|System|Error and Warning|
+|Microsoft-FSLogix-Apps/Operational|Error, Warning, and Information|
+|Microsoft-FSLogix-Apps/Admin|Error, Warning, and Information|
+
+To learn more about Windows Event Logs, see [Windows Event records properties](../azure-monitor/agents/data-sources-windows-events.md#configure-windows-event-logs).
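+
+To see which of these event logs are sending errors and warnings to your Log Analytics workspace, you can run a query like this sketch against the standard `Event` table:
+
+```
+Event
+| where TimeGenerated > ago(1d)
+| where EventLog in ("Application", "System",
+    "Microsoft-Windows-TerminalServices-RemoteConnectionManager/Admin",
+    "Microsoft-Windows-TerminalServices-LocalSessionManager/Operational",
+    "Microsoft-FSLogix-Apps/Operational", "Microsoft-FSLogix-Apps/Admin")
+| where EventLevelName in ("Error", "Warning")
+| summarize Events = count() by EventLog, EventLevelName
+| sort by Events desc
+```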
+
+## Next steps
+
+- To get started, see [Use Azure Virtual Desktop Insights to monitor your deployment](insights.md).
+- To estimate, measure, and manage your data storage costs, see [Estimate Azure Monitor costs](insights-costs.md).
+- If you encounter a problem, check out our [troubleshooting guide](troubleshoot-insights.md) for help and known issues.
++
+You can also set up Azure Advisor to help you figure out how to resolve or prevent common issues. Learn more at [Introduction to Azure Advisor](../advisor/advisor-overview.md).
+
+If you need help or have any questions, check out our community resources:
+
+- Ask questions or make suggestions to the community at the [Azure Virtual Desktop TechCommunity](https://techcommunity.microsoft.com/t5/Windows-Virtual-Desktop/bd-p/WindowsVirtualDesktop).
+
+- To learn how to leave feedback, see [Troubleshooting overview, feedback, and support for Azure Virtual Desktop](troubleshoot-set-up-overview.md#report-issues).
+
+- You can also leave feedback for Azure Virtual Desktop at the [Azure Virtual Desktop feedback hub](https://support.microsoft.com/help/4021566/windows-10-send-feedback-to-microsoft-with-feedback-hub-app).
virtual-desktop Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights.md
+
+ Title: Use Azure Virtual Desktop Insights to monitor your deployment - Azure
+description: How to use Azure Virtual Desktop Insights.
++ Last updated : 03/31/2020+++
+# Use Azure Virtual Desktop Insights to monitor your deployment
+
+Azure Virtual Desktop Insights is a dashboard built on Azure Monitor Workbooks that helps IT professionals understand their Azure Virtual Desktop environments. This topic will walk you through how to set up Azure Virtual Desktop Insights to monitor your Azure Virtual Desktop environments.
+
+## Requirements
+
+Before you start using Azure Virtual Desktop Insights, you'll need to set up the following things:
+
+- All Azure Virtual Desktop environments you monitor must be based on the latest release of Azure Virtual Desktop that's compatible with Azure Resource Manager.
+- At least one configured Log Analytics Workspace. Use a designated Log Analytics workspace for your Azure Virtual Desktop session hosts to ensure that performance counters and events are only collected from session hosts in your Azure Virtual Desktop deployment.
+- Enable data collection for the following things in your Log Analytics workspace:
+ - Diagnostics from your Azure Virtual Desktop environment
+ - Recommended performance counters from your Azure Virtual Desktop session hosts
+ - Recommended Windows Event Logs from your Azure Virtual Desktop session hosts
+
+ The data setup process described in this article is the only one you'll need to monitor Azure Virtual Desktop. You can disable all other items sending data to your Log Analytics workspace to save costs.
+
+Anyone monitoring Azure Virtual Desktop Insights for your environment will also need the following read-access permissions:
+
+- Read-access to the Azure resource groups that hold your Azure Virtual Desktop resources.
+- Read-access to the subscription's resource groups that hold your Azure Virtual Desktop session hosts.
+- Read-access to the Log Analytics workspace. If you use multiple Log Analytics workspaces, grant read access to each one so that their data can be viewed.
+
+> [!NOTE]
+> Read access only lets admins view data. They'll need different permissions to manage resources in the Azure Virtual Desktop portal.
+
+## Open Azure Virtual Desktop Insights
+
+You can open Azure Virtual Desktop Insights with one of the following methods:
+
+- Go to [aka.ms/avdi](https://aka.ms/avdi).
+- Search for and select **Azure Virtual Desktop** from the Azure portal, then select **Insights**.
+- Search for and select **Azure Monitor** from the Azure portal. Select **Insights Hub** under **Insights**, then select **Azure Virtual Desktop**.
+
+Once you have the page open, enter the **Subscription**, **Resource group**, **Host pool**, and **Time range** of the environment you want to monitor.
+
+>[!NOTE]
+>Azure Virtual Desktop currently only supports monitoring one subscription, resource group, and host pool at a time. If you can't find the environment you want to monitor, see [our troubleshooting documentation](troubleshoot-insights.md) for known feature requests and issues.
+
+## Log Analytics settings
+
+To start using Azure Virtual Desktop Insights, you'll need at least one Log Analytics workspace. Use a designated Log Analytics workspace for your Azure Virtual Desktop session hosts to ensure that performance counters and events are only collected from session hosts in your Azure Virtual Desktop deployment. If you already have a workspace set up, skip ahead to [Set up using the configuration workbook](#set-up-using-the-configuration-workbook). To set one up, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md).
+
+>[!NOTE]
+>Standard data storage charges for Log Analytics will apply. To start, we recommend you choose the pay-as-you-go model and adjust as you scale your deployment and take in more data. To learn more, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+
+## Set up using the configuration workbook
+
+If it's your first time opening Azure Virtual Desktop Insights, you'll need to set up Azure Virtual Desktop Insights for your Azure Virtual Desktop environment. To configure your resources:
+
+1. Open Azure Virtual Desktop Insights in the Azure portal at [aka.ms/avdi](https://aka.ms/avdi), then select **configuration workbook**.
+2. Select an environment to configure under **Subscription**, **Resource Group**, and **Host Pool**.
+
+The configuration workbook sets up your monitoring environment and lets you check the configuration after you've finished the setup process. It's important to check your configuration if items in the dashboard aren't displaying correctly, or when the product group publishes updates that require new settings.
+
+### Resource diagnostic settings
+
+To collect information on your Azure Virtual Desktop infrastructure, you'll need to enable several diagnostic settings on your Azure Virtual Desktop host pools and workspaces (this is your Azure Virtual Desktop workspace, not your Log Analytics workspace). To learn more about host pools, workspaces, and other Azure Virtual Desktop resource objects, see our [environment guide](environment-setup.md).
+
+You can learn more about Azure Virtual Desktop diagnostics and the supported diagnostic tables at [Send Azure Virtual Desktop diagnostics to Log Analytics](diagnostics-log-analytics.md).
+
+To set your resource diagnostic settings in the configuration workbook:
+
+1. Select the **Resource diagnostic settings** tab in the configuration workbook.
+2. Select **Log Analytics workspace** to send Azure Virtual Desktop diagnostics.
+
+#### Host pool diagnostic settings
+
+To set up host pool diagnostics using the resource diagnostic settings section in the configuration workbook:
+
+1. Under **Host pool**, check to see whether Azure Virtual Desktop diagnostics are enabled. If they aren't, an error message will appear that says "No existing diagnostic configuration was found for the selected host pool." You'll need to enable the following supported diagnostic tables:
+
+ - Checkpoint
+ - Error
+ - Management
+ - Connection
+ - HostRegistration
+ - AgentHealthStatus
+
+ >[!NOTE]
+ > If you don't see the error message, you don't need to do steps 2 through 4.
+
+2. Select **Configure host pool**.
+3. Select **Deploy**.
+4. Refresh the configuration workbook.
+
+#### Workspace diagnostic settings
+
+To set up workspace diagnostics using the resource diagnostic settings section in the configuration workbook:
+
+ 1. Under **Workspace**, check to see whether Azure Virtual Desktop diagnostics are enabled for the Azure Virtual Desktop workspace. If they aren't, an error message will appear that says "No existing diagnostic configuration was found for the selected workspace." You'll need to enable the following supported diagnostics tables:
+
+ - Checkpoint
+ - Error
+ - Management
+ - Feed
+
+ >[!NOTE]
+ > If you don't see the error message, you don't need to do steps 2-4.
+
+2. Select **Configure workspace**.
+3. Select **Deploy**.
+4. Refresh the configuration workbook.
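+
+After you deploy the host pool and workspace diagnostic settings, you can confirm that diagnostics data is reaching the Log Analytics workspace. Here's a minimal sketch that counts recent records across the diagnostic tables listed above (`isfuzzy=true` keeps the query working even if some tables haven't received data yet):
+
+```
+union isfuzzy=true
+    WVDCheckpoints, WVDErrors, WVDManagement, WVDConnections,
+    WVDFeeds, WVDHostRegistrations, WVDAgentHealthStatus
+| where TimeGenerated > ago(1h)
+| summarize Records = count() by Type
+| sort by Records desc
+```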
+
+### Session host data settings
+
+To collect information on your Azure Virtual Desktop session hosts, you'll need to install the Log Analytics agent on all session hosts in the host pool, make sure the session hosts are sending to a Log Analytics workspace, and configure your Log Analytics agent settings to collect performance data and Windows Event Logs.
+
+The Log Analytics workspace you send session host data to doesn't have to be the same one you send diagnostic data to. If you have Azure session hosts outside of your Azure Virtual Desktop environment, we recommend having a designated Log Analytics workspace for the Azure Virtual Desktop session hosts.
+
+To set the Log Analytics workspace where you want to collect session host data:
+
+1. Select the **Session host data settings** tab in the configuration workbook.
+2. Select the **Log Analytics workspace** you want to send session host data to.
+
+#### Session hosts
+
+You'll need to install the Log Analytics agent on all session hosts in the host pool and send data from those hosts to your selected Log Analytics workspace. If Log Analytics isn't configured for all the session hosts in the host pool, you'll see a **Session hosts** section at the top of **Session host data settings** with the message "Some hosts in the host pool are not sending data to the selected Log Analytics workspace."
+
+>[!NOTE]
+> If you don't see the **Session hosts** section or error message, all session hosts are set up correctly. Skip ahead to the setup instructions for [Workspace performance counters](#workspace-performance-counters). Currently, automated deployment is limited to 1,000 session hosts or fewer.
+
+To set up your remaining session hosts using the configuration workbook:
+
+1. Select **Add hosts to workspace**.
+2. Refresh the configuration workbook.
+
+>[!NOTE]
+>For larger host pools (more than 1,000 session hosts), or if there are deployment issues, we recommend installing the Log Analytics agent at [session host creation time](../virtual-machines/extensions/oms-windows.md#extension-schema) by using an ARM template.
+
+#### Workspace performance counters
+
+You'll need to enable specific performance counters to collect performance information from your session hosts and send it to the Log Analytics workspace.
+
+If you already have performance counters enabled and want to remove them, follow the instructions in [Configuring performance counters](../azure-monitor/agents/data-sources-performance-counters.md). You can add and remove performance counters in the same location.
+
+To set up performance counters using the configuration workbook:
+
+1. Under **Workspace performance counters** in the configuration workbook, check **Configured counters** to see the counters you've already enabled to send to the Log Analytics workspace. Check **Missing counters** to make sure you've enabled all required counters.
+2. If you have missing counters, select **Configure performance counters**.
+3. Select **Apply Config**.
+4. Refresh the configuration workbook.
+5. Make sure all the required counters are enabled by checking the **Missing counters** list.
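+
+As an optional check, you can also confirm that each session host is reporting counters. Here's a minimal sketch that shows, per computer, how many distinct counters arrived recently and when the last sample was received; hosts at the top of the list report the fewest counters:
+
+```
+Perf
+| where TimeGenerated > ago(1h)
+| summarize DistinctCounters = dcount(strcat(ObjectName, "\\", CounterName)),
+            LastSample = max(TimeGenerated) by Computer
+| sort by DistinctCounters asc
+```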
+
+#### Configure Windows Event Logs
+
+You'll also need to enable specific Windows Event Logs to collect errors, warnings, and information from the session hosts and send them to the Log Analytics workspace.
+
+If you've already enabled Windows Event Logs and want to remove them, follow the instructions in [Configuring Windows Event Logs](../azure-monitor/agents/data-sources-windows-events.md#configure-windows-event-logs). You can add and remove Windows Event Logs in the same location.
+
+To set up Windows Event Logs using the configuration workbook:
+
+1. Under **Windows Event Logs configuration**, check **Configured Event Logs** to see the Event Logs you've already enabled to send to the Log Analytics workspace. Check **Missing Event Logs** to make sure you've enabled all Windows Event Logs.
+2. If you have missing Windows Event Logs, select **Configure Events**.
+3. Select **Deploy**.
+4. Refresh the configuration workbook.
+5. Make sure all the required Windows Event Logs are enabled by checking the **Missing Event Logs** list.
+
+>[!NOTE]
+>If automatic event deployment fails, select **Open agent configuration** in the configuration workbook to manually add any missing Windows Event Logs.
+
+## Optional: configure alerts
+
+Azure Virtual Desktop Insights allows you to monitor Azure Monitor alerts happening within your selected subscription in the context of your Azure Virtual Desktop data. Azure Monitor alerts are an optional feature on your Azure subscriptions, and you need to set them up separately from Azure Virtual Desktop Insights. You can use the Azure Monitor alerts framework to set custom alerts on Azure Virtual Desktop events, diagnostics, and resources. To learn more about Azure Monitor alerts, see [Azure Monitor Log Alerts](../azure-monitor/alerts/alerts-log.md).
+
+## Diagnostic and usage data
+
+Microsoft automatically collects usage and performance data through your use of the Azure Virtual Desktop Insights service. Microsoft uses this data to improve the quality, security, and integrity of the service.
+
+To provide accurate and efficient troubleshooting capabilities, the collected data includes the portal session ID, Azure Active Directory user ID, and the name of the portal tab where the event occurred. Microsoft doesn't collect names, addresses, or other contact information.
+
+For more information about data collection and usage, see the [Microsoft Online Services Privacy Statement](https://privacy.microsoft.com/privacystatement).
+
+>[!NOTE]
+>To learn about viewing or deleting your personal data collected by the service, see [Azure Data Subject Requests for the GDPR](/microsoft-365/compliance/gdpr-dsr-azure). For more information about GDPR, see [the GDPR section of the Service Trust portal](https://servicetrust.microsoft.com/ViewPage/GDPRGetStarted).
+
+## Next steps
+
+Now that you've configured Azure Virtual Desktop Insights for your Azure Virtual Desktop environment, here are some resources that might help you start monitoring your environment:
+
+- Check out our [glossary](insights-glossary.md) to learn more about terms and concepts related to Azure Virtual Desktop Insights.
+- To estimate, measure, and manage your data storage costs, see [Estimate Azure Virtual Desktop Insights costs](insights-costs.md).
+- If you encounter a problem, check out our [troubleshooting guide](troubleshoot-insights.md) for help and known issues.
+- To see what's new in each version update, see [What's new in Azure Virtual Desktop Insights](whats-new-insights.md).
virtual-desktop Private Link Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-setup.md
When you set up your NSG, you must configure it to allow both the URLs in the [r
To validate your Private Link for Azure Virtual Desktop and make sure it's working:
-1. Check to see if your session hosts are registered and functional on the VNet. You can check their health status with [Azure Monitor](azure-monitor.md).
+1. Check to see if your session hosts are registered and functional on the VNet. You can check their health status with [Azure Monitor](insights.md).
1. Next, test your feed connections to make sure they perform as expected. Use the client and make sure you can add and refresh workspaces.
virtual-desktop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/overview.md
Once you've set up Azure Virtual Desktop, you have lots of options to customize
- [How to use Azure Active Directory](../../active-directory/fundamentals/active-directory-access-create-new-tenant.md) - [Using Windows 10 virtual machines with Intune](/mem/intune/fundamentals/windows-10-virtual-machines) - [How to deploy an app using MSIX app attach](msix-app-attach.md)-- [Use Azure Monitor for Azure Virtual Desktop to monitor your deployment](../azure-monitor.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
+- [Use Azure Virtual Desktop Insights to monitor your deployment](../insights.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
- [Set up a business continuity and disaster recovery plan](../disaster-recovery.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) - [Scale session hosts using Azure Automation](../set-up-scaling-script.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) - [Set up Universal Print](/universal-print/fundamentals/universal-print-getting-started)
Read the following articles to understand concepts essential to creating and man
- [Understanding licensing and per-user access pricing](licensing.md) - [Security guidelines for cross-organizational apps](security.md) - [Azure Virtual Desktop security best practices](../security-guide.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)-- [Azure Monitor for Azure Virtual Desktop glossary](../azure-monitor-glossary.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
+- [Azure Virtual Desktop Insights glossary](../azure-monitor-glossary.md?toc=/azure/virtual-desktop/remote-app-streaming/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
- [Azure Virtual Desktop for the enterprise](/azure/architecture/example-scenario/wvd/windows-virtual-desktop) - [Estimate total deployment costs](total-costs.md) - [Estimate per-user app streaming costs](streaming-costs.md)
virtual-desktop Streaming Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/streaming-costs.md
Per-user access pricing for Azure Virtual Desktop lets you grant access to apps
Before you can estimate per-user access costs for an existing deployment, you'll need the following things: - An Azure Virtual Desktop deployment that's had active users within the last month.-- [Azure Monitor for your Azure Virtual Desktop deployment](../azure-monitor.md)
+- [Azure Virtual Desktop Insights for your Azure Virtual Desktop deployment](../insights.md)
## Measure monthly user activity in a host pool
-In order to estimate total costs for running a host pool, you'll first need to know the number of active users over the past month. You can use Azure Monitor for Azure Virtual Desktop to find this number.
+In order to estimate total costs for running a host pool, you'll first need to know the number of active users over the past month. You can use Azure Virtual Desktop Insights to find this number.
-To check monthly active users on Azure Monitor:
+To check monthly active users on Azure Virtual Desktop Insights:
-1. Open the Azure portal, then search for and select **Azure Virtual Desktop**. After that, select **Insights** to open Azure Monitor for Azure Virtual Desktop.
+1. Open the Azure portal, then search for and select **Azure Virtual Desktop**. After that, select **Insights** to open Azure Virtual Desktop Insights.
2. Select the name of the subscription or host pool that you want to measure.
virtual-desktop Total Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/total-costs.md
In Azure Virtual Desktop, session host VMs use the following three Azure service
- Storage for managed disks (including OS storage per VM and any data disks for personal desktops) - Bandwidth (networking)
-These charges can be viewed at the Azure Resource Group level where the host pool-specific resources including session host VMs are assigned. If one or more host pools are also configured to use the paid Log Analytics service to send VM data to the optional Azure Virtual Desktop Insights feature, then the bill will also charge you for the Log Analytics for the corresponding Azure Resource Groups. For more information, see [Estimate Azure Virtual Desktop monitoring costs](../azure-monitor-costs.md).
+These charges can be viewed at the Azure Resource Group level where the host pool-specific resources including session host VMs are assigned. If one or more host pools are also configured to use the paid Log Analytics service to send VM data to the optional Azure Virtual Desktop Insights feature, then the bill will also charge you for the Log Analytics for the corresponding Azure Resource Groups. For more information, see [Estimate Azure Virtual Desktop monitoring costs](../insights-costs.md).
Of the three primary VM session host usage costs that are listed at the beginning of this section, compute usually costs the most. To mitigate compute costs and optimize resource demand with availability, many customers choose to [scale session hosts automatically](../set-up-scaling-script.md).
You can use the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/c
>You can add extra Azure Pricing Calculator modules to estimate the cost impact of other components of your deployment, including but not limited to: > >- Domain controllers
->- Other storage-dependent features, such as custom OS images, MSIX app attach, and Azure Monitor
+>- Other storage-dependent features, such as custom OS images, MSIX app attach, and Azure Virtual Desktop Insights
### Predicting user access costs
virtual-desktop Start Virtual Machine Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/start-virtual-machine-connect.md
You need to make sure you have the names of the resource group and host pool you
## Troubleshooting
-If you run into any issues with Start VM On Connect, we recommend you use the Azure Virtual Desktop [diagnostics feature](diagnostics-log-analytics.md) to check for problems. If you receive an error message, make sure to pay close attention to the message content and make a note of the error name for reference. You can also use [Azure Monitor for Azure Virtual Desktop](azure-monitor.md) to get suggestions for how to resolve issues.
+If you run into any issues with Start VM On Connect, we recommend you use the Azure Virtual Desktop [diagnostics feature](diagnostics-log-analytics.md) to check for problems. If you receive an error message, make sure to pay close attention to the message content and make a note of the error name for reference. You can also use [Azure Virtual Desktop Insights](insights.md) to get suggestions for how to resolve issues.
If the session host VM doesn't turn on, you'll need to check the health of the VM you tried to turn on as a first step.
virtual-desktop Troubleshoot Connection Quality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-connection-quality.md
The [Azure Virtual Desktop Experience Estimator tool](https://azure.microsoft.co
If your **Connection Network Data Logs** aren't going to Azure Log Analytics every two minutes, you'll need to check the following things: - Make sure you've [configured the diagnostic settings correctly](diagnostics-log-analytics.md).-- Make sure you've configured the VM and [monitoring agents](azure-monitor.md) correctly.
+- Make sure you've configured the VM correctly.
- Make sure you're actively using the session. Sessions that aren't actively used won't send data to Azure Log Analytics as frequently. ## Next steps
virtual-desktop Troubleshoot Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-insights.md
+
+ Title: Troubleshoot Monitor Azure Virtual Desktop - Azure
+description: How to troubleshoot issues with Azure Virtual Desktop Insights.
++ Last updated : 11/14/2022+++
+# Troubleshoot Azure Virtual Desktop Insights
+
+This article presents known issues and solutions for common problems in Azure Virtual Desktop Insights.
+
+>[!IMPORTANT]
+>[The Log Analytics Agent is currently being deprecated](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). While Azure Virtual Desktop Insights currently uses the Log Analytics Agent for Azure Virtual Desktop support, you'll eventually need to migrate to the Azure Monitor Agent by August 31, 2024. We'll provide instructions for how to migrate when we release the update that allows Azure Virtual Desktop Insights to support the Azure Monitor Agent. Until then, continue to use the Log Analytics Agent.
+
+## Issues with configuration and setup
+
+If the configuration workbook isn't working properly to automate setup, you can use these resources to set up your environment manually:
+
+- To manually enable diagnostics or access the Log Analytics workspace, see [Send Azure Virtual Desktop diagnostics to Log Analytics](diagnostics-log-analytics.md).
+- To install the Log Analytics extension on a session host manually, see [Log Analytics virtual machine extension for Windows](../virtual-machines/extensions/oms-windows.md).
+- To set up a new Log Analytics workspace, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md).
+- To add, remove, or edit performance counters, see [Configuring performance counters](../azure-monitor/agents/data-sources-performance-counters.md).
+- To configure Windows Event Logs for a Log Analytics workspace, see [Collect Windows event log data sources with Log Analytics agent](../azure-monitor/agents/data-sources-windows-events.md).
+
+## My data isn't displaying properly
+
+If your data isn't displaying properly, check the following common solutions:
+
+- First, make sure you've set up correctly with the configuration workbook as described in [Use Azure Virtual Desktop Insights to monitor your deployment](insights.md). If you're missing any counters or events, the data associated with them won't appear in the Azure portal.
+- Check your access permissions and contact the resource owners to request any missing permissions. Anyone monitoring Azure Virtual Desktop requires the following permissions:
+ - Read-access to the Azure resource groups that hold your Azure Virtual Desktop resources
+ - Read-access to the subscription's resource groups that hold your Azure Virtual Desktop session hosts
+ - Read-access to whichever Log Analytics workspaces you're using
+- You may need to open outgoing ports in your server's firewall to allow Azure Monitor and Log Analytics to send data to the portal. To learn how to do this, see the following articles:
+ - [Azure Monitor Outgoing ports](../azure-monitor/app/ip-addresses.md)
+ - [Log Analytics Firewall Requirements](../azure-monitor/agents/log-analytics-agent.md#firewall-requirements).
+- Not seeing data from recent activity? You may want to wait for 15 minutes and refresh the feed. Azure Monitor has a 15-minute latency period for populating log data. To learn more, see [Log data ingestion time in Azure Monitor](../azure-monitor/logs/data-ingestion-time.md).
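+
+A quick way to confirm that your session hosts are sending any agent data at all is to check the `Heartbeat` table in the Log Analytics workspace. Here's a minimal sketch; hosts with an old or missing heartbeat likely have an agent or connectivity problem:
+
+```
+Heartbeat
+| where TimeGenerated > ago(1d)
+| summarize LastHeartbeat = max(TimeGenerated) by Computer
+| extend MinutesSinceLastHeartbeat = datetime_diff("minute", now(), LastHeartbeat)
+| sort by MinutesSinceLastHeartbeat desc
+```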
+
+If you're not missing any information but your data still isn't displaying properly, there may be an issue in the query or the data sources. Review [known issues and limitations](#known-issues-and-limitations).
+
+## I want to customize Azure Virtual Desktop Insights
+
+Azure Virtual Desktop Insights uses Azure Monitor Workbooks. Workbooks let you save a copy of the Azure Virtual Desktop workbook template and make your own customizations.
+
+By design, custom Workbook templates will not automatically adopt updates from the product group. For more information, see [Troubleshooting workbook-based insights](../azure-monitor/insights/troubleshoot-workbooks.md) and the [Workbooks overview](../azure-monitor/visualize/workbooks-overview.md).
+
+## I can't interpret the data
+
+Learn more about data terms at the [Azure Virtual Desktop Insights glossary](insights-glossary.md).
+
+## The data I need isn't available
+
+If you want to monitor more Performance counters or Windows Event Logs, you can enable them to send diagnostics info to your Log Analytics workspace and monitor them in **Host Diagnostics: Host browser**.
+
+- To add performance counters, see [Configuring performance counters](../azure-monitor/agents/data-sources-performance-counters.md#configuring-performance-counters)
+- To add Windows Events, see [Configuring Windows Event Logs](../azure-monitor/agents/data-sources-windows-events.md#configure-windows-event-logs)
+
+Can't find a data point to help diagnose an issue? Send us feedback!
+
+- To learn how to leave feedback, see [Troubleshooting overview, feedback, and support for Azure Virtual Desktop](troubleshoot-set-up-overview.md).
+- You can also leave feedback for Azure Virtual Desktop at the [Azure Virtual Desktop feedback hub](https://support.microsoft.com/help/4021566/windows-10-send-feedback-to-microsoft-with-feedback-hub-app).
+
+## Known issues and limitations
+
+The following are issues and limitations we're aware of and working to fix:
+
+- You can only monitor one host pool at a time.
+- To save favorite settings, you have to save a custom template of the workbook. Custom templates won't automatically adopt updates from the product group.
+- The configuration workbook will sometimes show "query failed" errors when loading your selections. Refresh the query, reenter your selection if needed, and the error should resolve itself.
+- Some error messages aren't phrased in a user-friendly way, and not all error messages are described in documentation.
+- The total sessions performance counter can over-count sessions by a small number and your total sessions may appear to go above your Max Sessions limit.
+- Available sessions count doesn't reflect scaling policies on the host pool.
+- Do you see contradictory or unexpected connection times? While rare, a connection's completion event can go missing and can impact some visuals and metrics.
+- Time to connect includes the time it takes users to enter their credentials; this correlates to the experience but in some cases can show false peaks.
+
+
+## Next steps
+
+- To get started, see [Use Azure Virtual Desktop Insights to monitor your deployment](insights.md).
+- To estimate, measure, and manage your data storage costs, see [Estimate Azure Monitor costs](insights-costs.md).
+- Check out our [glossary](insights-glossary.md) to learn more about terms and concepts related to Azure Virtual Desktop Insights.
virtual-desktop Troubleshoot Set Up Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-set-up-overview.md
This article provides an overview of the issues you may encounter when setting u
## Troubleshoot deployment and connection issues
-[Azure Monitor for Azure Virtual Desktop](azure-monitor.md) is a dashboard built on Azure Monitor workbooks that can quickly troubleshoot and identify issues in your Azure Virtual Desktop environment for you. If you prefer working with Kusto queries, we recommend using the built-in diagnostic feature, [Log Analytics](diagnostics-log-analytics.md), instead.
+[Azure Virtual Desktop Insights](insights.md) is a dashboard built on Azure Monitor workbooks that can quickly troubleshoot and identify issues in your Azure Virtual Desktop environment for you. If you prefer working with Kusto queries, we recommend using the built-in diagnostic feature, [Log Analytics](diagnostics-log-analytics.md), instead.
## Report issues
virtual-desktop Whats New Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-insights.md
+
+ Title: What's new in Azure Virtual Desktop Insights?
+description: New features and product updates in Azure Virtual Desktop Insights.
++ Last updated : 08/16/2022++++
+# What's new in Azure Virtual Desktop Insights?
+
+This article describes the changes we make to each new version of Azure Virtual Desktop Insights.
+
+If you're not sure which version of Azure Virtual Desktop Insights you're currently using, you can find it in the bottom-right corner of your Insights page or configuration workbook. To access your workbook, go to [https://aka.ms/azmonwvdi](https://aka.ms/azmonwvdi).
+
+## How to read version numbers
+
+There are three numbers in each version of Azure Virtual Desktop Insights. Here's what each number means:
+
+- The first number is the major version, and is usually used for major releases.
+
+- The second number is the minor version. Minor versions are for backwards-compatible changes such as new features and deprecation notices.
+
+- The third number is the patch version, which is used for small changes that fix incorrect behavior or bugs.
+
+For example, a release with a version number of 1.2.31 is on the first major release, the second minor release, and patch number 31.
+
+When one of the numbers is increased, all numbers after it must change, too. One release has one version number. However, not all version numbers track releases; patch numbers, for example, can be somewhat arbitrary.
+
+## Version 1.2.2
+
+This update was released in July 2022 and has the following changes:
+
+- Updated checkpoint queries for LaunchExecutable.
+
+## Version 1.2.1
+
+This update was released in June 2022 and has the following changes:
+
+- Updated templates for Configuration Workbook to be available via the gallery rather than external GitHub.
+
+## Version 1.2.0
+
+This update was released in May 2022 and has the following changes:
+
+- Updated language for connection performance to "time to be productive" for clarity.
+
+- Improved and expanded **Connection Details** flyout panel with additional information on connection history for selected users.
+
+- Added a fix for duplication of some data.
+
+## Version 1.1.10
+
+This update was released in February 2022 and has the following changes:
+
+- We added support for [category groups](../azure-monitor/essentials/diagnostic-settings.md#resource-logs) for resource logs.
+
+## Version 1.1.8
+
+This update was released in November 2021 and has the following changes:
+
+- We added a dynamic check for host pool and workspaces Log Analytics tables to show instances where diagnostics may not be configured.
+- Updated the source table for session history and calculations for users per core.
+
+## Version 1.1.7
+
+This update was released in November 2021 and has the following changes:
+
+- We increased the session host limit to 1000 for the configuration workbook to allow for larger deployments.
+
+## Version 1.1.6
+
+This update was released in October 2021 and has the following changes:
+
+- We updated contents to reflect change from *Windows Virtual Desktop* to *Azure Virtual Desktop*.
+
+## Version 1.1.4
+
+This update was released in October 2021 and has the following changes:
+
+- We updated data usage reporting in the configuration workbook to include the agent health table.
+
+## Version 1.1.3
+
+This update was released in September 2021 and has the following changes:
+
+- We updated filtering behavior to make use of resource IDs.
+
+## Version 1.1.2
+
+This update was released in August 2021 and has the following changes:
+
+- We updated some formatting in the workbooks.
+
+## Version 1.1.1
+
+This update was released in July 2021 and has the following changes:
+
+- We added the Workbooks gallery for quick access to Azure Virtual Desktop related Azure workbooks.
+
+## Version 1.1.0
+
+This update was released July 2021 and has the following changes:
+
+- We added a **Data Generated** tab to the configuration workbook for detailed data on storage space usage for Azure Virtual Desktop Insights to allow more insight into Log Analytics usage.
+
+## Version 1.0.4
+
+This update was released in June 2021 and has the following changes:
+
+- We made some changes to formatting and layout for better use of whitespace.
+- We changed the sort order for **User Input Delay** details in **Host Performance** to descending.
+
+## Version 1.0.3
+
+This update was released in May 2021 and has the following changes:
+
+- We updated formatting to prevent truncation of text.
+
+## Version 1.0.2
+
+This update was released in May 2021 and has the following changes:
+
+- We resolved an issue with user per core calculation in the **Utilization** tab.
+
+## Version 1.0.1
+
+This update was released in April 2021 and has the following changes:
+
+- We made a formatting update for columns containing sparklines.
+
+## Version 1.0.0
+
+This update was released in March 2021 and has the following changes:
+
+- We introduced a new visual indicator for high-impact errors and warnings from the Azure Virtual Desktop agent event log on the host diagnostics page.
+
+- We removed five expensive process performance counters from the default configuration. For more information, see our blog post at [Updated guidance on Azure Virtual Desktop Insights](https://techcommunity.microsoft.com/t5/windows-virtual-desktop/updated-guidance-on-azure-monitor-for-wvd/m-p/2236173).
+
+- The setup process for Windows Event Log for the configuration workbook is now automated.
+
+- The configuration workbook now supports automated deployment of recommended Windows Event Logs.
+
+- The configuration workbook can now install the Log Analytics agent and set the preferred workspace for session hosts outside of the resource group's region.
+
+- The configuration workbook now has a tabbed layout for the setup process.
+
+- We introduced versioning with this update.
+
+## Next steps
+
+For the general What's New page, see [What's New in Azure Virtual Desktop](whats-new.md).
+
+To learn more about Azure Virtual Desktop Insights, see [Use Azure Virtual Desktop Insights to monitor your deployment](insights.md).
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
The Azure Marketplace now has Generation 2 images for Windows 10 Enterprise and
Based on customer feedback, we've released a new version of the Windows 10 Enterprise multi-session image that has an unconfigured version of FSLogix already installed. We hope this makes your Azure Virtual Desktop deployment easier.
-### Azure Monitor for Azure Virtual Desktop is now in General Availability
+### Azure Virtual Desktop Insights is now in General Availability
-Azure Monitor for Azure Virtual Desktop is now generally available to the public. This feature is an automated service that monitors your deployments and lets you view events, health, and troubleshooting suggestions in a single place. For more information, see [our documentation](azure-monitor.md) or check out [our TechCommunity post](https://techcommunity.microsoft.com/t5/windows-virtual-desktop/azure-monitor-for-windows-virtual-desktop-is-generally-available/m-p/2242861).
+Azure Virtual Desktop Insights is now generally available to the public. This feature is an automated service that monitors your deployments and lets you view events, health, and troubleshooting suggestions in a single place. For more information, see [our documentation](insights.md) or check out [our TechCommunity post](https://techcommunity.microsoft.com/t5/windows-virtual-desktop/azure-monitor-for-windows-virtual-desktop-is-generally-available/m-p/2242861).
### March 2021 updates for Teams on Azure Virtual Desktop
We've recently published [an article about the Azure security baseline](security
Here's what changed in December 2020:
-### Azure Monitor for Azure Virtual Desktop
+### Azure Virtual Desktop Insights
-The public preview for Azure Monitor for Azure Virtual Desktop is now available. This new feature includes a robust dashboard built on top of Azure Monitor Workbooks to help IT professionals understand their Azure Virtual Desktop environments. Check out [the announcement on our blog](https://techcommunity.microsoft.com/t5/windows-virtual-desktop/azure-monitor-for-windows-virtual-desktop-public-preview/m-p/1946587) for more details.
+The public preview for Azure Virtual Desktop Insights is now available. This new feature includes a robust dashboard built on top of Azure Monitor Workbooks to help IT professionals understand their Azure Virtual Desktop environments. Check out [the announcement on our blog](https://techcommunity.microsoft.com/t5/windows-virtual-desktop/azure-monitor-for-windows-virtual-desktop-public-preview/m-p/1946587) for more details.
### Azure Resource Manager template change
virtual-machines External Ntpsource Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/external-ntpsource-configuration.md
**Applies to:** :heavy_check_mark: Windows Virtual Machines
-Use this guide to learn how to setup time synchronization for your Azure Windows Virtual Machines that belong to an Active Directory Domain.
+Use this guide to learn how to set up time synchronization for your Azure Windows Virtual Machines that belong to an Active Directory Domain.
## Time sync hierarchy in Active Directory Domain Services
All other Domain Controllers would then sync time against the PDC, and all other
If you have an Active Directory domain running on virtual machines hosted in Azure, follow these steps to properly set up Time Sync. >[!NOTE]
->This guide focuses on usign the **Group Policy Management** console to perform the configuration. You can achieve the same results by using the Command Prompt, PowerShell, or by manually modifying the Registry; however those methods are not in scope in this article.
+>This guide focuses on using the **Group Policy Management** console to perform the configuration. You can achieve the same results by using the Command Prompt, PowerShell, or by manually modifying the Registry; however, those methods are not in scope in this article.
## GPO to allow the PDC to synchronize with an External NTP Source
To check current time source in your **PDC**, from an elevated command prompt ru
8. Double click the *Configure Windows NTP Client* policy and set it to *Enabled*, configure the parameter *NTPServer* to point to an IP address or FQDN of a time server followed by `,0x9` for example: `131.107.13.100,0x9` and configure *Type* to **NTP**. For all the other parameters you can use the default values, or use custom ones according to your corporate needs. 9. Click the *Next Setting* button, set the *Enable Windows NTP Client* policy to *Enabled* and click *OK* 10. In the *Scope* tab of the newly created GPO navigate to **Security Filtering** and highlight the *Authenticated Users* group -> Click the *Remove* button -> *OK* -> *OK*
-11. Create a WMI Filter to dinamycally get the Domain Controller that holds the PDC role:
+11. Create a WMI Filter to dynamically get the Domain Controller that holds the PDC role:
- In the *Group Policy Management* console, navigate to *WMI Filters*, right-click on it and select *New*.
- In the *New WMI Filter* window, give a name to the new filter, for example *Get PDC Emulator* -> Fill out the *Description* field (optional) -> Click the *Add* button.
- In the *WMI Query* window, leave the *Namespace* as is. In the *Query* text box, paste the following string: `Select * from Win32_ComputerSystem where DomainRole = 5`, then click the *OK* button. A quick way to validate the query is sketched below.
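If you want to confirm the query before using it in the filter, a quick check is sketched below; it's run locally in PowerShell and returns a result only on the domain controller that currently holds the PDC Emulator role.

```powershell
# DomainRole = 5 identifies the DC holding the PDC Emulator role; on any other DC this returns nothing.
Get-CimInstance -Query "Select * from Win32_ComputerSystem where DomainRole = 5"
```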
To check current time source in your **PDC**, from an elevated command prompt ru
14. Link the GPO to the **Domain Controllers** Organizational Unit. >[!NOTE]
->It can take up to 15 minutes for these changes to reflect in the system.
+>It can take up to 15 minutes for these changes to be reflected by the system.
From an elevated command prompt, rerun *w32tm /query /source* and compare the output to the one you noted at the beginning of the configuration. It should now be set to the NTP server you chose.
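A minimal verification sketch is shown below; the commands are run on the PDC from an elevated prompt, and the server returned should match whatever you configured in the GPO.

```powershell
w32tm /query /source    # should now return the external NTP server configured in the GPO
w32tm /resync           # optionally force an immediate synchronization
w32tm /query /status    # shows stratum, last successful sync time, and the configured source
```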
Below are links to more details about the time sync:
- [Windows Server 2016 Improvements ](/windows-server/networking/windows-time-service/windows-server-2016-improvements) - [Accurate Time for Windows Server 2016](/windows-server/networking/windows-time-service/accurate-time)-- [Support boundary to configure the Windows Time service for high-accuracy environments](/windows-server/networking/windows-time-service/support-boundary)
+- [Support boundary to configure the Windows Time service for high-accuracy environments](/windows-server/networking/windows-time-service/support-boundary)
virtual-machines Automation Configure Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-webapp.md
Title: Configure a Deployer Web Application for SAP on Azure Deployment Automation Framework description: Configure a web app as a part of the control plane to help creating and deploying SAP workload zones and systems on Azure.--++ Last updated 10/19/2022
virtual-machines Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/cal-s4h.md
The online library is continuously updated with Appliances for demo, proof of co
| [**SAP S/4HANA 2021 FPS02, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3f4931de-b15b-47f1-b93d-a4267296b8bc) | July 19 2022 | This appliance contains SAP S/4HANA 2021 (FPS02) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Management (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=3f4931de-b15b-47f1-b93d-a4267296b8bc&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
| [**SAP S/4HANA 2021 FPS01, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3f4931de-b15b-47f1-b93d-a4267296b8bc) | April 26 2022 | This appliance contains SAP S/4HANA 2021 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Management (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=3f4931de-b15b-47f1-b93d-a4267296b8bc&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
| [**SAP BW/4HANA 2021 including BW/4HANA Content 2.0 SP08 - Dev Edition**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/06725b24-b024-4757-860d-ac2db7b49577) | May 11 2022 | This solution offers you an insight of SAP BW/4HANA. SAP BW/4HANA is the next generation Data Warehouse optimized for HANA. Beside the basic BW/4HANA options, the solution offers a bunch of HANA optimized BW/4HANA Content and the next step of Hybrid Scenarios with SAP Data Warehouse Cloud. As the system is pre-configured, you can start directly implementing your scenarios. | [Create Appliance](https://cal.sap.com/registration?sguid=06725b24-b024-4757-860d-ac2db7b49577&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-| [**SAP ABAP Platform 1909, Developer Edition**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/7bd4548f-a95b-4ee9-910a-08c74b4f6c37) | June 21 2021 | The SAP ABAP Platform on SAP HANA gives you access to SAP ABAP Platform 1909 Developer Edition on SAP HANA. This solution is pre-configured with many other elements – including: SAP ABAP RESTful Application Programming Model, SAP Fiori launchpad, SAP gCTS, SAP ABAP Test Cockpit, and pre-configured frontend / backend connections, etc. It also includes all the standard ABAP AS infrastructure: Transaction Management, database operations / persistence, Change and Transport System, SAP Gateway, interoperability with ABAP Development Toolkit and SAP WebIDE, and much more. | [Create Appliance](https://cal.sap.com/registration?sguid=7bd4548f-a95b-4ee9-910a-08c74b4f6c37&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-| [**SAP ERP 6.0 EhP 6 for Data Migration to SAP S/4HANA**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/56825489-df3a-4b6d-999c-329a63ef5e8a) | October 24 2022 | Update password of DDIC 100, SAP* 000. This system can be used as source system for the "direct transfer" data migration scenarios of the SAP S/4HANA *Fully-Activated Appliance*. It might also be useful as an "open playground" for SP ERP 6.0 EhP6 scenarios, however, the contained business processes and data structures aren't documented explicitly. | [Create Appliance](https://cal.sap.com/registration?sguid=56825489-df3a-4b6d-999c-329a63ef5e8a&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+| **SAP ERP 6.0 EhP 6 for Data Migration to SAP S/4HANA** | October 24 2022 | Update password of DDIC 100, SAP* 000. This system can be used as a source system for the "direct transfer" data migration scenarios of the SAP S/4HANA Fully-Activated Appliance. It might also be useful as an "open playground" for SAP ERP 6.0 EhP6 scenarios; however, the contained business processes and data structures are not documented explicitly. | [Create Appliance](https://cal.sap.com/registration?sguid=56825489-df3a-4b6d-999c-329a63ef5e8a&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+| **Enterprise Management Layer for SAP S/4HANA 2021 FPS02** | November 09 2022 | The enterprise management layer for SAP S/4HANA 2021 offers a ready-to-run, pre-configured, localized core template based on pre-activated SAP Best Practices on-premise country versions covering 43 countries. The CAL solution can be used to get familiar with this offering. | [Create Appliance](https://cal.sap.com/registration?sguid=431a38f2-2582-42df-830c-1e2ba1031a0d&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
| [**SAP NetWeaver 7.5 SP15 on SAP ASE**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/solutions/69efd5d1-04de-42d8-a279-813b7a54c1f6) | January 20 2020 | SAP NetWeaver 7.5 SP15 on SAP ASE | [Create Appliance](https://cal.sap.com/registration?sguid=69efd5d1-04de-42d8-a279-813b7a54c1f6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
virtual-machines Disaster Recovery Overview Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/disaster-recovery-overview-guide.md
+
+ Title: Disaster Recovery overview and infrastructure guidelines for SAP workload
+description: Disaster Recovery planning and consideration for SAP workload
+++++++ Last updated : 11/21/2022++
+# Disaster recovery overview and infrastructure guidelines for SAP workload
+
+Many organizations running critical business applications on Azure set up both High Availability (HA) and Disaster Recovery (DR) strategy. The purpose of high availability is to increase the SLA of business systems by eliminating single points of failure in the underlying system infrastructure. High Availability technologies reduce the effect of unplanned infrastructure failure and help with planned maintenance. Disaster Recovery is defined as policies, tools and procedures to enable the recovery or continuation of vital technology infrastructure and systems following a geographically widespread natural or human-induced disaster.
+
+To achieve [high availability for SAP workload on Azure](sap-high-availability-guide-start.md), virtual machines are typically deployed in an [availability set](planning-guide.md#18810088-f9be-4c97-958a-27996255c665) or in [availability zones](planning-guide.md#availability-zones) to protect applications from infrastructure maintenance or failures within a region. But such a deployment doesn't protect applications from a widespread disaster affecting the entire region. To protect applications from a regional disaster, a disaster recovery strategy for the applications should be in place. Disaster recovery is a documented and structured approach that is designed to assist an organization in executing the recovery processes in response to a disaster, to protect or minimize IT service disruption, and to promote recovery.
+
+This document provides details on protecting SAP workloads from a large-scale catastrophe by implementing a structured DR approach. The details are presented at an abstract level, based on different Azure services and SAP components. The exact DR strategy and the order of recovery for your SAP workload must be tested, documented, and fine-tuned regularly. The document focuses on the Azure-to-Azure DR strategy for SAP workload.
+
+## General disaster recovery plan considerations
+
+SAP workload on Azure runs on virtual machines in combination with different Azure services to deploy the different layers (central services, application servers, database server) of a typical SAP NetWeaver application. In general, a DR strategy should be planned for the entire IT landscape running on Azure, which means taking non-SAP applications into account as well. The business solution running in SAP systems may not run as a whole if dependent services or assets aren't recovered on the DR site. So you need a well-defined, comprehensive DR plan that considers all components and systems.
+
+For DR on Azure, organizations should consider different scenarios that may trigger failover.
+
+- SAP application or business process availability.
+- Unavailability of Azure services (like virtual machines, storage, or load balancers) within a region due to widespread failure.
+- Potential threats and vulnerabilities to the application (for example, an application-layer DDoS attack).
+- Operational tasks required by business compliance to test the DR strategy (for example, a DR failover exercise performed every year as per compliance).
+
+To achieve the recovery goal for different scenarios, organizations must outline the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) for their workload based on the business requirements. RTO describes the amount of time an application can be down, typically measured in hours, minutes, or seconds. RPO describes the amount of transactional data the business can afford to lose before normal operations resume. Identifying the RTO and RPO of your business is crucial, as it helps you design your DR strategy optimally. The components (compute, storage, database, etc.) involved in an SAP workload are replicated to the DR region using different techniques (Azure native services, native DB replication technology, custom scripts). Each technique provides a different RPO, which must be accounted for when designing a DR strategy. On Azure, you can use native services like Azure Site Recovery and Azure Backup to help meet the RTO and RPO of your SAP workloads. Refer to the SLA of [Azure Site Recovery](https://azure.microsoft.com/support/legal/sla/site-recovery/v1_2/) and [Azure Backup](https://azure.microsoft.com/support/legal/sla/backup/v1_0/) to optimally align with your RTO and RPO.
+
+## Design consideration for disaster recovery on Azure
+
+There are different elements to consider when designing a disaster recovery solution on Azure. The principles and concepts used to design on-premises disaster recovery solutions apply to Azure as well. But in Azure, region selection is a key part of the disaster recovery design strategy. Keep the following points in mind when choosing a DR region on Azure.
+
+- Business or regulatory compliance requirements may specify a distance requirement between a primary and disaster recovery site. A distance requirement helps to provide availability if a natural disaster occurs in a wider geography. In such a case, an organization can choose another Azure region as its disaster recovery site. Azure regions are often separated by a large distance, which might be hundreds or even thousands of kilometers, as in the United States. Because of the distance, the network round-trip latency will be higher, which may result in a higher RPO.
+
+- Customers who want to mimic their on-premises metro DR strategy on Azure can use [availability zones for disaster recovery](../../../site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md). But a zone-to-zone DR strategy may fall short of the resilience requirement if there's a geographically widespread natural disaster.
+
+- On Azure, each region is paired with another region within the same geography (except for Brazil South). This approach allows for platform-provided replication of resources across regions. The benefits of choosing a paired region are described in the [region pairs document](../../../virtual-machines/regions.md#region-pairs). When an organization chooses to use Azure paired regions, several additional points for an SAP workload need to be considered:
+
+ - Not all Azure services offer cross-regional replication in the paired region.
+ - The Azure services and features in paired Azure regions may not be symmetrical. For example, Azure NetApp Files or VM SKUs like the M-series that are available in the primary region might not be available in the paired region. To check if an Azure product or service is available in a region, see [Azure Products by Region](https://azure.microsoft.com/global-infrastructure/services/).
+
+ - The GRS option, which replicates data to the paired region, is available for storage accounts with the standard storage type. But standard storage isn't suitable for SAP DBMS or virtual data disks.
+
+ - The Azure Backup service used to back up [supported solutions](../../../backup/backup-overview.md#what-can-i-back-up) can replicate backups only between paired regions. For all your other data, run your own replication with native DBMS features like SQL Server Always On, SAP HANA System Replication, and other services. Use a combination of Azure Site Recovery, rsync or robocopy, and other third-party software for the SAP application layer.
+
+## Reference SAP workload deployment
+
+After identifying a DR region, it's important that the breadth of Azure core services (like network, compute, and storage) you've configured in the primary region is available and can be configured in the DR region. Organizations must develop a DR deployment pattern for their SAP workload. The deployment pattern varies and must align with the organization's needs.
+
+- Deploy the production SAP workload into your primary region and the non-production workload into the disaster recovery region.
+- Deploy all SAP workloads (production and non-production) into your primary region. The disaster recovery region is only used if there's a failover.
+
+The following reference architecture shows a typical SAP NetWeaver system running on Azure with high availability in the primary region. The secondary site shown below is the disaster recovery site where the SAP systems will be restored after a disaster event. Both the primary and disaster recovery regions are part of the same subscription. To achieve DR for an SAP workload, you need to identify a recovery strategy for each SAP layer along with the different Azure services that the application uses.
+
+Organizations should plan and design a DR strategy for their entire IT landscape. Usually, SAP systems running in a production environment are integrated with different services and interfaces like Active Directory, DNS, third-party applications, and so on. So you must include the non-SAP systems and other services in your disaster recovery planning as well. This document focuses on the recovery planning for SAP applications, but you can expand the size and scope of the DR planning for dependent components to fit your requirements.
+
+[![Disaster Recovery reference architecture for SAP workload](media/disaster-recovery/disaster-recovery-reference-architecture.png)](media/disaster-recovery/disaster-recovery-reference-architecture.png#lightbox)
+
+## Infrastructure components of DR solution for SAP workload
+
+An SAP workload running on Azure uses different infrastructure components to run a business solution. To plan DR for such a solution, it's essential that all infrastructure components configured in the primary region are available and can be configured in the DR region as well. The following infrastructure components should be factored in when designing a DR solution for an SAP workload on Azure.
+
+- Network
+- Compute
+- Storage
+
+### Network
+
+- [ExpressRoute](../../../expressroute/expressroute-introduction.md) extends your on-premises network into the Microsoft cloud over a private connection with the help of a connectivity provider. When designing the disaster recovery architecture, you must account for building robust backend network connectivity using geo-redundant ExpressRoute circuits. We advise setting up at least one ExpressRoute circuit from on-premises to the primary region, and the other(s) should connect to the disaster recovery region. Refer to the [Designing of Azure ExpressRoute for disaster recovery](../../../expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md) article, which describes different scenarios to design disaster recovery for ExpressRoute.
+
+ >[!Note]
+ > Consider setting up a site-to-site (S2S) VPN as a backup of Azure ExpressRoute. For more information, see [Using S2S VPN as a backup for Azure ExpressRoute Private Peering](../../../expressroute/use-s2s-vpn-as-backup-for-expressroute-privatepeering.md).
+
+- A virtual network and its subnets span all availability zones in a region. For DR across two regions, you need to configure separate virtual networks and subnets in the disaster recovery region. Refer to [About networking in Azure VM disaster recovery](../../../site-recovery/azure-to-azure-about-networking.md) to learn more about the networking setup in the DR region.
+
+- Azure [Standard Load Balancer](../../../load-balancer/load-balancer-overview.md) provides networking elements for the high-availability design of your SAP systems. For clustered systems, Standard Load Balancer provides the virtual IP address for the cluster service, like ASCS/SCS instances and databases running on VMs. To run a highly available SAP system on the DR site, a separate load balancer must be created, and the cluster configuration should be adjusted accordingly.
+
+- [Azure Application Gateway](../../../application-gateway/overview.md) is a web traffic load balancer. With its [Web Application Firewall](../../../web-application-firewall/ag/ag-overview.md) functionality, it's a well-suited service to expose web applications to the internet with improved security. Azure Application Gateway can serve either public (internet) or private clients, or both, depending on the configuration. To accept similar incoming HTTP(S) traffic after failover, a separate Azure Application Gateway must be configured in the DR region.
+
+- As networking components (like the virtual network, firewall, etc.) are created separately in the DR region, you need to make sure that the SAP workload in the DR region is adapted to the networking changes, such as DNS updates and firewall rules.
+
+- Virtual networks in both regions are independent. To establish communication between the two, you need to enable [virtual network peering](../../../virtual-network/virtual-network-peering-overview.md) between the two regions, as sketched below.
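+
+A minimal peering sketch with Azure PowerShell is shown below; the resource group and virtual network names are assumptions for illustration only.
+
+```powershell
+# Peer the primary and DR virtual networks in both directions so that replication and
+# administrative traffic can flow between the two regions.
+$primaryVnet = Get-AzVirtualNetwork -ResourceGroupName "rg-sap-primary" -Name "vnet-sap-primary"
+$drVnet      = Get-AzVirtualNetwork -ResourceGroupName "rg-sap-dr" -Name "vnet-sap-dr"
+
+Add-AzVirtualNetworkPeering -Name "primary-to-dr" -VirtualNetwork $primaryVnet -RemoteVirtualNetworkId $drVnet.Id
+Add-AzVirtualNetworkPeering -Name "dr-to-primary" -VirtualNetwork $drVnet -RemoteVirtualNetworkId $primaryVnet.Id
+```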
+
+### Virtual machines
+
+- On Azure, different components of a single SAP system run on virtual machines with different SKU types. For DR, protection of an application (SAP NetWeaver and non-SAP) running on Azure VMs can be enabled by replicating components using [Azure Site Recovery](../../../site-recovery/site-recovery-overview.md) to another Azure region or zone. With Azure Site Recovery, Azure VMs are replicated continuously from the primary to the disaster recovery site. Depending on the selected Azure DR region, the VM SKU type may not be available on the DR site. You need to make sure that the required VM SKU types are available in the Azure DR region as well. Check [Azure Products by Region](https://azure.microsoft.com/global-infrastructure/services/) to see whether the required VM family SKU type is available.
+
+- For databases running on Azure virtual machines, it's recommended to use native database replication technology to synchronize data to the disaster recovery site. The large VMs on which the databases are running may not be available in all regions. If you're using [availability zones for disaster recovery](../../../site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md), you should check that the respective VM SKUs are available in the zone of your disaster recovery site.
+
+ > [!Note]
+ >
+ > Using Azure Site Recovery for databases isn't advised, as it doesn't guarantee DB consistency and has a [data churn limitation](../../../site-recovery/azure-to-azure-support-matrix.md#limits-and-data-change-rates).
+
+- With production applications running in the primary region at all times, [reserved instances](https://azure.microsoft.com/pricing/reserved-vm-instances/) are typically used to economize Azure costs. If you use reserved instances, you need to sign up for a 1-year or 3-year term commitment, which may not be cost effective for the DR site. Also, setting up Azure Site Recovery doesn't guarantee the capacity of the required VM SKU during your failover. To make sure that the VM SKU capacity is available, you can consider enabling [on-demand capacity reservation](../../../virtual-machines/capacity-reservation-overview.md). It reserves compute capacity in an Azure region or an Azure availability zone for any duration of time without commitment. Azure Site Recovery is [integrated](https://azure.microsoft.com/updates/ondemand-capacity-reservation-with-azure-site-recovery-safeguards-vms-failover/) with on-demand capacity reservation. With this integration, you can use capacity reservation with Azure Site Recovery to reserve compute capacity in the DR site and guarantee your failovers. For more information, read the on-demand capacity reservation [limitations and restrictions](../../../virtual-machines/capacity-reservation-overview.md#limitations-and-restrictions).
+
+- An Azure subscription has quotas for VM families (for example, the Mv2 family) and other resources. Sometimes organizations want to use a different Azure subscription for DR. Each subscription (primary and DR) may have different quotas assigned for each VM family. Make sure that the subscription used for the DR site has enough compute quota available; a check like the sketch after this list can help.
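+
+As a quick sanity check, you can query SKU availability and remaining VM family quota in the candidate DR region with Azure PowerShell. This is a minimal sketch; the region name, VM SKU, and quota family name are examples only.
+
+```powershell
+# Is the required VM SKU offered in the DR region?
+Get-AzComputeResourceSku |
+    Where-Object { $_.ResourceType -eq "virtualMachines" -and $_.Name -eq "Standard_M128s" -and $_.Locations -contains "westeurope" }
+
+# How much quota is left for the corresponding VM family in that region?
+Get-AzVMUsage -Location "westeurope" |
+    Where-Object { $_.Name.Value -like "*MSFamily*" }
+```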
+
+### Storage
+
+- When you enable Azure Site Recovery for a VM to set up DR, the OS and local data disks attached to the VM are replicated to the DR site. During replication, VM disk writes are sent to a cache storage account in the source region. Data is sent from there to the target region, and recovery points are generated from the data. When you fail over a VM during DR, a recovery point is used to restore the VM in the target region. But Azure Site Recovery doesn't support all storage types that are available in Azure. For more information, see the [Azure Site Recovery support matrix for storage](../../../site-recovery/azure-to-azure-support-matrix.md#replicated-machinesstorage).
+
+- In addition to Azure managed data disks attached to VMs, different Azure native storage solutions are used to run SAP applications on Azure. The DR approach for each Azure storage solution may differ, as not all storage services available in Azure are supported with Azure Site Recovery. Below is a list of the storage types typically used for SAP workloads (an AzCopy sketch for SMB shares follows this list).
+
+ | Storage type | DR strategy recommendation |
+ | :-- | :-- |
+ | Managed disk | Azure Site Recovery |
+ | NFS on Azure files (LRS or ZRS) | Custom script to replicate data between two sites (for example, rsync) |
+ | NFS on Azure NetApp Files | Use [Cross-region replication of Azure NetApp Files volumes](../../../azure-netapp-files/cross-region-replication-introduction.md) |
+ | Azure shared disk (LRS or ZRS) | Custom solution to replicate data between two sites |
+ | SMB on Azure files (LRS or ZRS) | Use [AzCopy](../../../storage/common/storage-use-azcopy-files.md) to copy files between two sites |
+ | SMB on Azure NetApp Files | Use [Cross-region replication of Azure NetApp Files volumes](../../../azure-netapp-files/cross-region-replication-introduction.md) |
+
+- For custom built storage solutions like NFS cluster, you need to make sure the appropriate DR strategy is in place.
+
+- Different native Azure storage services (like Azure Files, Azure NetApp Files, and Azure shared disks) may not be available in all regions. To have a similar SAP setup in the DR region after failover, ensure the respective storage service is offered in the DR site. For more information, check [Azure Products by Region](https://azure.microsoft.com/global-infrastructure/services/).
+
+- If using [availability zones for disaster recovery](../../../site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md), keep in mind the following points:
+
+ - The Azure NetApp Files feature isn't zone aware yet, and currently it isn't deployed in all availability zones in an Azure region. So the Azure NetApp Files service may not be available in the availability zone chosen for your DR strategy.
+ - Cross-region replication of Azure NetApp Files volumes is only available in fixed [region pairs](../../../azure-netapp-files/cross-region-replication-introduction.md#supported-region-pairs), not across zones.
+
+- If you've configured your storage with Active Directory integration, similar setup should be done on the DR site storage account as well.
+
+- Azure shared disks require cluster software like Windows Server Failover Clustering (WSFC) that handles cluster node communication and write locking. So to have a DR strategy for Azure shared disks, you need to have the shared disk managed by cluster software in the DR site as well. You can then use a script to copy data from the shared disk attached to a cluster in the primary region to the shared disk attached to another cluster in the DR region.
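+
+For SMB shares on Azure Files, the copy between the primary and DR storage accounts mentioned in the table above can be scripted with AzCopy. The following is a minimal sketch; the account names, share name, and SAS tokens are placeholders.
+
+```powershell
+# Copy an SMB Azure file share to a pre-created share in the DR region (placeholders in angle brackets).
+azcopy copy "https://<primary-account>.file.core.windows.net/<share>?<SAS>" `
+            "https://<dr-account>.file.core.windows.net/<share>?<SAS>" `
+            --recursive
+```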
+
+## Next steps
+
+- [Disaster Recovery Guidelines for SAP workload](disaster-recovery-sap-guide.md)
+- [Azure to Azure disaster recovery architecture using Azure Site Recovery service](../../../site-recovery/azure-to-azure-architecture.md)
virtual-machines Disaster Recovery Sap Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/disaster-recovery-sap-guide.md
+
+ Title: Disaster Recovery recommendation for SAP workload
+description: Recommendation of DR strategy for each layer of SAP workload
+++++++ Last updated : 11/21/2022++
+# Disaster recovery guidelines for SAP application
+
+To configure Disaster Recovery (DR) for SAP workload on Azure, you need to test, fine-tune, and update the process regularly. Testing disaster recovery helps to identify the sequence of dependent services that are required before you trigger an SAP workload DR failover or start the system on the secondary site. Organizations usually have their SAP systems connected to Active Directory (AD) and Domain Name System (DNS) services to function correctly. When you set up DR for your SAP workload, make sure AD and DNS services are functioning before you recover SAP and other non-SAP systems, so that the application functions correctly. For guidance on protecting Active Directory and DNS, learn [how to protect Active Directory and DNS](../../../site-recovery/site-recovery-active-directory.md). The recommendations for SAP applications described in this document are at an abstract level; you need to design your DR strategy based on your specific setup and document the end-to-end DR scenario.
+
+## DR recommendation for SAP workloads
+
+Usually in distributed SAP NetWeaver systems, the central services, the database, and shared storage (NFS/SMB) are single points of failure (SPOFs). To mitigate the effect of the different SPOFs, it's necessary to set up redundancy for these components. The redundancy of these SPOF components in the primary region is achieved by configuring high availability. The high availability setup protects the SAP system from local failure or catastrophe. But to protect SAP applications from a geographically dispersed disaster, a DR strategy should be implemented for all the SAP components.
+
+For SAP systems running on virtual machines, you can use [Azure Site Recovery](../../../site-recovery/site-recovery-overview.md) to create a disaster recovery plan. Following is the recommended disaster recovery approach for each component of an SAP system. Standalone non-NetWeaver SAP engines such as TREX and non-SAP applications aren't covered in this document.
+
+| Components | Recommendation |
+| --- | --- |
+| SAP Web Dispatcher | Replicate VM using Azure Site Recovery |
+| SAP Central Services | Replicate VM using Azure Site Recovery |
+| SAP Application server | Replicate VM using Azure Site Recovery |
+| SAP Database | Use replication method offered by the database |
+| Shared Storage | Replicate content, using appropriate method per storage type |
+
+### SAP Web Dispatcher
+
+The SAP Web Dispatcher component works as a load balancer for SAP traffic among SAP application servers. You have different options to achieve high availability of the SAP Web Dispatcher component in the primary region. For more information about these options, see [High Availability of the SAP Web Dispatcher](https://help.sap.com/docs/SAP_S4HANA_ON-PREMISE/683d6a1797a34730a6e005d1e8de6f22/489a9a6b48c673e8e10000000a42189b.html) and [SAP Web Dispatcher HA setup on Azure](https://blogs.sap.com/2022/04/02/sap-on-azure-sap-web-dispatcher-highly-availability-setup-and-virtual-hostname-ip-configuration-with-azure-load-balancer/).
+
+- Option 1: High availability using cluster solution
+- Option 2: High availability with several parallel SAP Web Dispatchers.
+
+To achieve DR for a highly available SAP Web Dispatcher setup in the primary region, you can use [Azure Site Recovery](../../../site-recovery/site-recovery-overview.md). In the above reference architecture, parallel web dispatchers (option 2) are running in the primary region and Azure Site Recovery is used to achieve DR. If you have configured SAP Web Dispatcher using option 1 in the primary region, you need to make some additional changes after failover to have a similar HA setup in the DR region. Because SAP Web Dispatcher high availability with a cluster solution is configured in a similar manner to SAP Central Services, follow the same guidelines as mentioned for SAP Central Services.
+
+### SAP Central Services
+
+The SAP central services instance, which contains the enqueue and the message server, is one of the SPOFs of your SAP application. In an SAP system, there can be only one such instance, and it can be configured for high availability. Read [High Availability for SAP Central Service](sap-planning-supported-configurations.md#high-availability-for-sap-central-service) to understand the different high availability solutions for SAP workload on Azure.
+
+Configuring high availability for SAP Central Services protects resources and processes from local incidents. To achieve DR for SAP Central Services, you can use Azure Site Recovery. Alongside using Azure Site Recovery to replicate VMs and local disks, there are additional considerations for your DR strategy. Check the section below for more information, based on the operating system used for SAP Central Services.
+
+#### [Linux](#tab/linux)
+
+For an SAP system, the redundancy of SPOF components in the primary region is achieved by configuring high availability. To achieve a similar high availability setup in the disaster recovery region after failover, you need to consider additional points like cluster reconfiguration and the availability of SAP shared directories, alongside using Azure Site Recovery to replicate VMs to the DR site. On Linux, high availability of the SAP application can be achieved using the Pacemaker cluster solution. The diagram below shows the different components involved in configuring high availability for SAP central services with Pacemaker. Each component must be taken into consideration to have a similar high availability setup on the DR site. If you have configured SAP Web Dispatcher using the Pacemaker cluster solution, similar considerations apply as well.
+
+![SAP system Linux architecture](media/disaster-recovery/disaster-recovery-sap-linux-architecture.png)
+
+##### Internal load balancer
+
+Azure Site Recovery replicates VMs to the DR site, but it doesn't replicate the Azure load balancer. You'll need to create a separate internal load balancer on the DR site, either beforehand or after failover. If you create the internal load balancer beforehand, create an empty backend pool and add the VMs after the failover event, as in the sketch below.
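+
+A minimal sketch with Azure PowerShell is shown below; the names and the DR subnet ID are assumptions, and health probes and load-balancing rules still need to be created to match your primary-region configuration.
+
+```powershell
+# Create an empty Standard internal load balancer in the DR region; VMs are added to the
+# backend pool, and load-balancing rules and health probes are configured, around failover.
+$feIp   = New-AzLoadBalancerFrontendIpConfig -Name "ascs-frontend" -SubnetId $drSubnetId -PrivateIpAddress "10.20.0.10"
+$bePool = New-AzLoadBalancerBackendAddressPoolConfig -Name "ascs-backend"
+New-AzLoadBalancer -ResourceGroupName "rg-sap-dr" -Name "lb-sap-ascs-dr" -Location "westeurope" `
+    -Sku Standard -FrontendIpConfiguration $feIp -BackendAddressPool $bePool
+```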
+
+##### Pacemaker cluster solution
+
+The configuration of a Pacemaker cluster resides in local files on the VMs, which are replicated to the DR site with Azure Site Recovery. The as-is Pacemaker cluster configuration won't work out of the box on the VMs after failover. Additional cluster reconfiguration is required to make the solution work.
+
+Read these blogs to learn about the pacemaker cluster reconfiguration in the DR region, based on the type of your storage and fencing mechanism.
+
+- [SAP ASCS/ERS HA Cluster with SBD device (using iSCSI target server) failover to DR region using Azure Site Recovery](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-ascs-ers-ha-cluster-with-sbd-device-using-iscsi-target/ba-p/3577235).
+- [SAP ASCS HA Cluster (in Linux OS) failover to DR region using Azure Site Recovery](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-ascs-ha-cluster-in-linux-os-failover-to-dr-region-using/ba-p/2120369).
+
+##### SAP shared directories for Linux
+
+The high availability setup of SAP NetWeaver or the ABAP platform uses an enqueue replication server, with a Pacemaker cluster configuration, to achieve application-level redundancy for the enqueue service of the SAP system. The high availability setup of SAP central services (ASCS and ERS) uses NFS mounts. So you need to make sure the SAP binaries and data in these NFS mounts are replicated to the DR site. Azure Site Recovery replicates the VMs and attached local managed disks, but it doesn't replicate NFS mounts. Based on the type of NFS storage you've configured for the setup, you need to make sure the data is replicated and available in the DR site. The cross-regional replication methodology for each storage type is presented at an abstract level; you need to confirm the exact steps to replicate the storage and perform testing.
+
+| SAP shared directories | Cross regional replication |
+| --- | --- |
+| NFS on Azure files | Custom (like rsync) |
+| NFS on ANF | Yes ([Cross Region Replication](../../../azure-netapp-files/cross-region-replication-introduction.md)) |
+| NFS cluster | Custom |
+
+>[!Tip]
+> We recommend deploying one of the Azure first-party NFS
+
+##### Fencing Mechanism
+
+Irrespective of the operating system (SLES or RHEL) and its version, Pacemaker requires a valid fencing mechanism for the entire solution to work properly. Based on the type of fencing mechanism you set up in your primary region, you need to make sure the same fencing mechanism is set up on the DR site after failover.
+
+| Fencing Mechanism | Cross region DR recommendation |
+| --- | --- |
+| SBD using iSCSI target server | Replicate iSCSI target server using Azure Site Recovery.<br/> On DR VMs, discover iSCSI disk again. |
+| Azure fence agent | Enable Managed System Identities (MSI) on DR VMs.<br/>Assign custom roles.<br/> Update the fence agent resource in cluster. |
+| SBD using Azure shared disk* | Configure new Azure Shared Disk on DR region. Attach Azure Shared Disk to DR VMs after failover.<br/>[Set up Azure shared disk SBD device](high-availability-guide-suse-pacemaker.md#set-up-an-azure-shared-disk-sbd-device). |
+
+*ZRS for Azure shared disk is available in [limited regions](../../../virtual-machines/disks-redundancy.md#limitations).
+
+>[!Note]
+> We recommend having the same fencing mechanism for both the primary and DR region for ease of operation and failover. It isn't advised to use a different fencing mechanism after failover to the DR site.
+++
+### SAP Application Servers
+
+In the primary region, the redundancy of SAP application servers is achieved by installing instances on multiple VMs. To have DR for the SAP application servers, [Azure Site Recovery](../../../site-recovery/azure-to-azure-tutorial-enable-replication.md) can be set up for each application server VM. For shared storage (transport filesystem, interface data filesystem) attached to the application servers, follow the appropriate DR practice based on the type of [shared storage](disaster-recovery-overview-guide.md#storage).
+
+### SAP Database Servers
+
+For databases running SAP workloads, use the native DBMS replication technology to configure DR. Using Azure Site Recovery for databases isn't recommended, as it doesn't guarantee DB consistency and has a [data churn limitation](../../../site-recovery/azure-to-azure-support-matrix.md#limits-and-data-change-rates). The replication technology for each database is different, so follow the respective database guidelines. The table below shows the list of databases used for SAP workloads and the corresponding DR recommendation.
+
+| Database | DR recommendation |
+| --- | --- |
+| SAP HANA | [HANA System Replication (HSR)](sap-hana-availability-across-regions.md) |
+| Oracle | [Oracle Data Guard (FarSync)](../../../virtual-machines/workloads/oracle/oracle-reference-architecture.md#disaster-recovery-for-oracle-databases) |
+| IBM DB2 | [High availability disaster recovery (HADR)](dbms-guide-ha-ibm.md) |
+| Microsoft SQL | [Microsoft SQL Always On](dbms_guide_sqlserver.md#sql-server-always-on) |
+| SAP ASE | [ASE HADR Always On](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/installation-procedure-for-sybase-16-3-patch-level-3-always-on/ba-p/368199) |
+| SAP MaxDB | [Standby Database](https://wiki.scn.sap.com/wiki/pages/viewpage.action?pageId=72123826) |
+
+For a cost-optimized solution, you can also use the backup and restore option as a database DR strategy.
+
+## Back up and restore
+
+Backup and restore is another solution you can use to achieve disaster recovery for your SAP workloads if the business RTO and RPO are non-critical. You can use [Azure Backup](../../../backup/backup-overview.md), a cloud-based backup service, to take copies of the different components of your SAP workload, like virtual machines, managed disks, and supported databases. To learn more about the general support settings and limitations for Azure Backup scenarios and deployments, see the [Azure Backup support matrix](../../../backup/backup-support-matrix.md).
+
+| Services | Component | Azure Backup Support |
+| --- | --- | --- |
+| Compute | [Azure VMs](../../../backup/backup-support-matrix-iaas.md) | Supported |
+| Storage | [Azure Managed Disks including shared disks](../../../backup/disk-backup-support-matrix.md) | Supported |
+| Storage | [Azure File Share - SMB (Standard or Premium)](../../../backup/azure-file-share-support-matrix.md) | Supported |
+| Storage | [Azure blobs](../../../backup/blob-backup-support-matrix.md) | Supported |
+| Storage | Azure File Share - NFS (Standard or Premium) | Not Supported |
+| Storage | Azure NetApp Files | Not Supported |
+| Database | [SAP HANA database in Azure VMs](../../../backup/sap-hana-backup-support-matrix.md) | Supported |
+| Database | [SQL server in Azure VMs](../../../backup/sql-support-matrix.md) | Supported |
+| Database | [Oracle](../oracle/oracle-database-backup-azure-backup.md) | Supported* |
+| Database | IBM DB2, SAP ASE | Not Supported |
+
+>[!Note]
+>
+>*Azure Backup supports Oracle databases using [Azure VM backup for database consistent snapshots](../../..//backup/backup-azure-linux-database-consistent-enhanced-pre-post.md).
+>
+> Azure Backup doesn't support all Azure storage types and databases that are used for SAP workloads.
+
+Azure Backup stores backups in a Recovery Services vault, which replicates your data based on the chosen replication type (LRS, ZRS, or GRS). For [geo-redundant storage (GRS)](../../../storage/common/storage-redundancy.md#geo-redundant-storage), your backup data is replicated to the paired secondary region. With the [cross-region restore](../../../backup/backup-support-matrix.md#cross-region-restore) feature enabled, you can restore data of the supported management types in the secondary region. A minimal setup sketch follows.
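+
+As an illustration, creating a vault and setting its redundancy with Azure PowerShell might look like the following minimal sketch; the resource group, vault name, and region are examples only, and the redundancy must be set before any items are protected.
+
+```powershell
+# Create a Recovery Services vault and set its backup storage redundancy to geo-redundant,
+# so that backup data is also copied to the paired secondary region.
+$vault = New-AzRecoveryServicesVault -ResourceGroupName "rg-sap-backup" -Name "rsv-sap-prod" -Location "westeurope"
+Set-AzRecoveryServicesBackupProperty -Vault $vault -BackupStorageRedundancy GeoRedundant
+```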
+
+Backup and restore is a more traditional, cost-optimized approach, but it comes with the trade-off of a higher RTO, because you need to restore all the applications from backup if there's a failover to the DR region. So you need to analyze your business needs and design your DR strategy accordingly.
+
+## References
+
+- [Tutorial: Set up disaster recovery for Azure VMs](../../../site-recovery/azure-to-azure-tutorial-enable-replication.md)
+- [Azure Backup service](../../../backup/backup-overview.md).
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- November 22, 2022: Update of [SAP workloads on Azure: planning and deployment checklist](sap-deployment-checklist.md) to add latest recommendations
- November 18, 2022: Add a recommendation to use Pacemaker simple mount configuration for new implementations on SLES 15 in [Azure VMs HA for SAP NW on SLES with simple mount and NFS](high-availability-guide-suse-nfs-simple-mount.md), [Azure VMs HA for SAP NW on SLES with NFS on Azure File](high-availability-guide-suse-nfs-azure-files.md), [Azure VMs HA for SAP NW on SLES with Azure NetApp Files](high-availability-guide-suse-netapp-files.md) and [Azure VMs HA for SAP NW on SLES](high-availability-guide-suse.md)
- November 15, 2022: Change in [HA for SAP HANA Scale-up with ANF on SLES](sap-hana-high-availability-netapp-files-suse.md), [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md), [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) to add recommendation to use mount option `nconnect` for workloads with higher throughput requirements
- November 15, 2022: Add a recommendation for minimum required version of package resource-agents in [High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server](./high-availability-guide-rhel-ibm-db2-luw.md)
virtual-machines Sap Deployment Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-deployment-checklist.md
Title: SAP workload planning and deployment checklist | Microsoft Docs
+ Title: SAP workload planning and deployment checklist
description: Checklist for planning SAP workload deployments to Azure and deploying the workloads -+ tags: azure-resource-manager Previously updated : 02/02/2022 Last updated : 11/21/2022 - # SAP workloads on Azure: planning and deployment checklist
-This checklist is designed for customers moving SAP NetWeaver, S/4HANA, and Hybris applications to Azure infrastructure as a service. Throughout the duration of the project, a customer and/or SAP partner should review the checklist. It's important to note that many of the checks are completed at the beginning of the project and during the planning phase. After the deployment is done, straightforward changes on deployed Azure infrastructure or SAP software releases can become complex.
-
-Review the checklist at key milestones during your project. Doing so will enable you to detect small problems before they become large problems. You'll also have enough time to re-engineer and test any necessary changes. Don't consider this checklist complete. Depending on your situation, you might need to perform many more checks.
-
-The checklist doesn't include tasks that are independent of Azure. For example, SAP application interfaces change during a move to the Azure platform or to a hosting provider.
-
-This checklist can also be used for systems that are already deployed. New features, like Write Accelerator and Availability Zones, and new VM types might have been added since you deployed. So it's useful to review the checklist periodically to ensure you're aware of new features in the Azure platform.
-
-## Project preparation and planning phase
-During this phase, you plan the migration of your SAP workload to the Azure platform. At a minimum, during this phase you need to create the following documents and define and discuss the following elements of the migration:
-
-1. High-level design document. This document should contain:
- - The current inventory of SAP components and applications, and a target application inventory for Azure.
- - A responsibility assignment matrix (RACI) that defines the responsibilities and assignments of the parties involved. Start at a high level, and work to more granular levels throughout planning and the first deployments.
- - A high-level solution architecture.
- - A decision about which Azure regions to deploy to. See the [list of Azure regions](https://azure.microsoft.com/global-infrastructure/regions/). To learn which services are available in each region, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/).
- - A networking architecture to connect from on-premises to Azure. Start to familiarize yourself with the [Virtual Datacenter blueprint for Azure](/azure/architecture/vdc/).
- - Security principles for running high-impact business data in Azure. To learn about data security, start with the [Azure security documentation](../../../security/index.yml).
-2. Technical design document. This document should contain:
- - A block diagram for the solution.
- - The sizing of compute, storage, and networking components in Azure. For SAP sizing of Azure VMs, see [SAP
- - note #1928533](https://launchpad.support.sap.com/#/notes/1928533).
- - Business continuity and disaster recovery architecture.
- - Detailed information about OS, DB, kernel, and SAP support pack versions. It's not necessarily true that every OS release supported by SAP NetWeaver or S/4HANA is supported on Azure VMs. The same is true for DBMS releases. Check the following sources to align and if necessary upgrade SAP releases, DBMS releases, and OS releases to ensure SAP and Azure support. You need to have release combinations supported by SAP and Azure to get full support from SAP and Microsoft. If necessary, you need to plan for upgrading some software components. More details on supported SAP, OS, and DBMS software are documented here:
- - [SAP support note #1928533](https://launchpad.support.sap.com/#/notes/1928533). This note defines the minimum OS releases supported on Azure VMs. It also defines the minimum database releases required for most non-HANA databases. Finally, it provides the SAP sizing for SAP-supported Azure VM types.
- - [SAP support note #2015553](https://launchpad.support.sap.com/#/notes/2015553). This note defines support policies around Azure storage and support relationship needed with Microsoft.
- - [SAP support note #2039619](https://launchpad.support.sap.com/#/notes/2039619). This note defines the Oracle support matrix for Azure. Oracle supports only Windows and Oracle Linux as guest operating systems on Azure for SAP workloads. This support statement also applies for the SAP application layer that runs SAP instances. However, Oracle doesn't support high availability for SAP Central Services in Oracle Linux through Pacemaker. If you need high availability for ASCS on Oracle Linux, you need to use SIOS Protection Suite for Linux. For detailed SAP certification data, see SAP support note [#1662610 - Support details for SIOS Protection Suite for Linux](https://launchpad.support.sap.com/#/notes/1662610). For Windows, the SAP-supported Windows Server Failover Clustering solution for SAP Central Services is supported in conjunction with Oracle as the DBMS layer.
- - [SAP support note #2235581](https://launchpad.support.sap.com/#/notes/2235581). This note provides the support matrix for SAP HANA on different OS releases.
- - SAP HANA-supported Azure VMs and [HANA Large Instances](./hana-overview-architecture.md) are listed on the [SAP website](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120).
- - [SAP Product Availability Matrix](https://support.sap.com/en/).
- - [SAP support note #2555629 - SAP HANA 2.0 Dynamic Tiering – Hypervisor and Cloud Support](https://launchpad.support.sap.com/#/notes/2555629)
- - [SAP support note #1662610 - Support details for SIOS Protection Suite for Linux](https://launchpad.support.sap.com/#/notes/1662610)
- - SAP notes for other SAP-specific products.
- - Using multi-SID cluster configurations for SAP Central Services is supported on Windows, SLES and RHEL guest operating systems on Azure. Keep in mind that the blast radius can increase the more ASCS/SCS you place on such a multi-SID cluster. You can find documentation for the respective guest OS scenario in these articles:
- - [SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover Clustering and shared disk on Azure](./sap-ascs-ha-multi-sid-wsfc-shared-disk.md)
- - [SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover Clustering and file share on Azure](./sap-ascs-ha-multi-sid-wsfc-file-share.md)
- - [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP applications multi-SID guide](./high-availability-guide-suse-multi-sid.md)
- - [High availability for SAP NetWeaver on Azure VMs on Red Hat Enterprise Linux for SAP applications multi-SID guide](./high-availability-guide-rhel-multi-sid.md)
- - High availability and disaster recovery architecture.
- - Based on RTO and RPO, define what the high availability and disaster recovery architecture needs to look like.
- - For high availability within a zone, check what the desired DBMS has to offer in Azure. Most DBMS packages offer synchronous methods of a synchronous hot standby, which we recommend for production systems. Also check the SAP-related documentation for different databases, starting with [Considerations for Azure Virtual Machines DBMS deployment for SAP workloads](./dbms_guide_general.md) and related documents.
- Using Windows Server Failover Clustering with a shared disk configuration for the DBMS layer as, for example, [described for SQL Server](/sql/sql-server/failover-clusters/windows/always-on-failover-cluster-instances-sql-server), isn't supported. Instead, use solutions like:
- - [SQL Server Always On](/previous-versions/azure/virtual-machines/windows/sqlclassic/virtual-machines-windows-classic-ps-sql-alwayson-availability-groups)
- - [Oracle Data Guard](../oracle/configure-oracle-dataguard.md)
- - [HANA System Replication](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.01/en-US/b74e16a9e09541749a745f41246a065e.html)
- - For disaster recovery across Azure regions, review the solutions offered by different DBMS vendors. Most of them support asynchronous replication or log shipping.
- - For the SAP application layer, determine whether you'll run your business regression test systems, which ideally are replicas of your production deployments, in the same Azure region or in your DR region. In the second case, you can target that business regression system as the DR target for your production deployments.
- - If you decide not to place the non-production systems in the DR site, look into Azure Site Recovery as a method for replicating the SAP application layer into the Azure DR region. For more information, see a [Set up disaster recovery for a multi-tier SAP NetWeaver app deployment](../../../site-recovery/site-recovery-sap.md).
- - If you decide to use a combined HADR configuration by using [Azure Availability Zones](../../../availability-zones/az-overview.md), familiarize yourself with the Azure regions where Availability Zones are available. Also take into account restrictions that can be introduced by increased network latencies between two Availability Zones.
-3. An inventory of all SAP interfaces (SAP and non-SAP).
-4. Design of foundation services. This design should include the following items:
- - Active Directory and DNS design.
- - Network topology within Azure and assignment of different SAP systems.
- - [Azure role-based access control (Azure RBAC)](../../../role-based-access-control/overview.md) structure for teams that manage infrastructure and SAP applications in Azure.
- - Resource group topology.
- - [Tagging strategy](../../../azure-resource-manager/management/tag-resources.md#tags-and-billing).
- - Naming conventions for VMs and other infrastructure components and/or logical names.
-5. Microsoft Professional or Premier Support contract. Identify your Microsoft Technical Account Manager (TAM) if you have a Premier support contract with Microsoft. For SAP support requirements, see [SAP support note #2015553](https://launchpad.support.sap.com/#/notes/2015553).
-6. The number of Azure subscriptions and core quota for the subscriptions. [Open support requests to increase quotas of Azure subscriptions](../../../azure-portal/supportability/regional-quota-requests.md) as needed.
-7. Data reduction and data migration plan for migrating SAP data into Azure. For SAP NetWeaver systems, SAP has guidelines on how to limit the volume of large amounts of data. See [this SAP guide](https://wiki.scn.sap.com/wiki/download/attachments/247399467/DVM_%20Guide_7.2.pdf?version=1&modificationDate=1549365516000&api=v2) about data management in SAP ERP systems. Some of the content also applies to NetWeaver and S/4HANA systems in general.
-8. An automated deployment approach. The goal of the automation of infrastructure deployments on Azure is to deploy in a deterministic way and get deterministic results. Many customers use PowerShell or CLI-based scripts. But there are various open-source technologies that you can use to deploy Azure infrastructure for SAP and even install SAP software. You can find examples on GitHub:
- - [Automated SAP Deployments in Azure Cloud](https://github.com/Azure/sap-automation)
- - [SAP HANA Installation](https://github.com/AzureCAT-GSI/SAP-HANA-ARM)
-9. Define a regular design and deployment review cadence between you as the customer, the system integrator, Microsoft, and other involved parties.
--
-## Pilot phase (strongly recommended)
-
+This checklist is designed for customers moving SAP applications to Azure infrastructure as a service. SAP applications in this document represent SAP products running the SAP kernel, including SAP NetWeaver, S/4HANA, BW, BW/4, and others. Throughout the project, a customer and/or SAP partner should review the checklist. It's important to note that many of the checks are completed at the beginning of the project and during the planning phase. After the deployment is done, straightforward changes on deployed Azure infrastructure or SAP software releases can become complex.
+
+Review the checklist at key milestones during your project. Doing so will enable you to detect small problems before they become large problems. You'll also have enough time to re-engineer and test any necessary changes. Don't consider this checklist complete. Depending on your situation, you might need to perform additional checks.
+
+The checklist doesn't include tasks that are independent of Azure. For example, SAP application interfaces change during a move to the Azure platform or to a hosting provider. SAP documentation and support notes also contain further tasks that aren't Azure-specific but need to be part of your overall planning checklist.
+
+This checklist can also be used for systems that are already deployed. New features or changed recommendations might apply to your environment. It's useful to review the checklist periodically to ensure you're aware of new features in the Azure platform.
+
+Main content in this document is organized in tabs, following a typical project's chronological order. Each tab builds on the actions completed and lessons learned in the previous phase. For a production migration, consider the content of **all** tabs, not just the production tab. To map typical project phases to the phase definitions used in this article, consult the following table.
+
+| Deployment checklist phases | Example project phases or milestones |
+|:-|:--|
+| Preparation and planning phase | Project kick-off / design and definition phase |
+| Pilot phase | Early validation / proof of concept / pilot |
+| Non-production phase | Completion of the detailed design / non-production environment builds / testing phase |
+| Production preparation phase | Dress rehearsal / user acceptance testing / mock cut-over / go-live checks |
+| Go-live phase | Production cut-over and go-live |
+| Post-production phase | Hypercare / transition to business as usual |
+
+## [Planning phase](#tab/planning)
+
+### Project preparation and planning phase
+
+During this phase, you plan the migration of your SAP workload to the Azure platform. Documents such as the [planning guide for SAP](./planning-guide.md) in Azure and the [Cloud Adoption Framework for SAP](/azure/cloud-adoption-framework/scenarios/sap/plan) cover many topics and provide background information for your preparation. At a minimum, during this phase you need to create the following documents and define and discuss the following elements of the migration:
+
+#### High-level design document
+This document should contain:
+- The current inventory of SAP components and applications, and a target application inventory for Azure.
+- A responsibility assignment matrix (RACI) that defines the responsibilities and assignments of the parties involved. Start at a high level, and work to more granular levels throughout planning and the first deployments.
+- A high-level solution architecture. Best practices and example architectures from [Azure Architecture Center](/azure/architecture/reference-architectures/sap/sap-overview) should be consulted.
+- A decision about which Azure regions to deploy to. See the [list of Azure regions](https://azure.microsoft.com/global-infrastructure/regions/), and list of [regions with availability zone support](/azure/reliability/availability-zones-service-support). To learn which services are available in each region, see [products available by region](https://azure.microsoft.com/global-infrastructure/services/).
+- A networking architecture to connect from on-premises to Azure. Start to familiarize yourself with the [Azure enterprise scale landing zone](/azure/cloud-adoption-framework/ready/enterprise-scale/) concept.
+- Security principles for running high-impact business data in Azure. To learn about data security, start with the Azure security documentation.
+- Storage strategy to cover block devices (Managed Disk) and shared filesystems (such as Azure Files or Azure NetApp Files) that should be further refined to file-system sizes and layouts in the technical design document.
+
+#### Technical design document
+This document should contain:
+- A block diagram for the solution showing the SAP and non-SAP applications and services
+- An [SAP Quicksizer project](http://www.sap.com/sizing) based on business document volumes. Map the Quicksizer output to compute, storage, and networking components in Azure. As an alternative to SAP Quicksizer, perform diligent sizing based on the current workload of the source SAP systems, taking into account the available information such as DBMS workload reports, SAP EarlyWatch reports, and compute and storage performance indicators.
+- Business continuity and disaster recovery architecture.
+- Detailed information about OS, DB, kernel, and SAP support pack versions. It's not necessarily true that every OS release supported by SAP NetWeaver or S/4HANA is supported on Azure VMs. The same is true for DBMS releases. Check the following sources to align and if necessary, upgrade SAP releases, DBMS releases, and OS releases to ensure SAP and Azure support. You need to have release combinations supported by SAP and Azure to get full support from SAP and Microsoft. If necessary, you need to plan for upgrading some software components. More details on supported SAP, OS, and DBMS software are documented here:
+ - [What SAP software is supported for Azure deployments](./sap-supported-product-on-azure.md)
+ - [SAP note 1928533 - SAP Applications on Microsoft Azure: Supported Products and Azure VM types](https://launchpad.support.sap.com/#/notes/1928533). This note defines the minimum OS and DBMS releases supported on Azure VMs. The note also provides SAPS sizing information for the SAP-supported Azure VM types.
+ - [SAP note 2015553 - SAP on Microsoft Azure: Support prerequisites](https://launchpad.support.sap.com/#/notes/2015553). This note defines prerequisites around Azure storage, networking, monitoring, and the support relationship needed with Microsoft.
+ - [SAP note 2039619](https://launchpad.support.sap.com/#/notes/2039619). This note defines the Oracle support matrix for Azure. Oracle supports only Windows and Oracle Linux as guest operating systems on Azure for SAP workloads. This support statement also applies to the SAP application layer that runs SAP instances, as long as they contain the Oracle client.
+ - SAP HANA-supported Azure VMs are listed on the [SAP website](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120). Details for each entry contain specifics and requirements, including the supported OS versions, which might not match the latest OS versions listed in [SAP note 2235581](https://launchpad.support.sap.com/#/notes/2235581).
+ - [SAP Product Availability Matrix](https://userapps.support.sap.com/sap/support/pam).
+
+The same technical design document(s) should also include:
+- High-level storage architecture decisions based on [Azure storage types for SAP workload](./planning-guide-storage.md)
+ - Managed Disks attached to each VM
+ - Filesystem layouts and sizing
+ - SMB and/or NFS volume layout and sizes, mount points where applicable
+- High availability, backup and disaster recovery architecture
+ - Based on RTO and RPO, define what the high availability and disaster recovery architecture needs to look like.
+ - Define the use of [availability zones](./sap-ha-availability-zones.md) for optimal protection or availability sets within a region.
+ - Considerations for Azure Virtual Machines DBMS deployment for SAP workloads and related documents. In Azure, using a shared disk configuration for the DBMS layer as, for example, [described for SQL Server](/sql/sql-server/failover-clusters/windows/always-on-failover-cluster-instances-sql-server), isn't supported. Instead, use solutions like:
+ - [SQL Server Always On](/previous-versions/azure/virtual-machines/windows/sqlclassic/virtual-machines-windows-classic-ps-sql-alwayson-availability-groups)
+ - [HANA System Replication](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.01/en-US/b74e16a9e09541749a745f41246a065e.html)
+ - [Oracle Data Guard](./dbms_guide_oracle.md#high-availability)
+ - [IBM Db2 HADR](./high-availability-guide-rhel-ibm-db2-luw.md)
+ - For disaster recovery across Azure regions, review the solutions offered by different DBMS vendors. Most of them support asynchronous replication or log shipping.
+ - For the SAP application layer, determine whether you'll run your business regression test systems, which ideally are replicas of your production deployments, in the same Azure region or in your DR region. In the second case, you can target that business regression system as the DR target for your production deployments.
+ - Look into Azure Site Recovery as a method for replicating the SAP application layer into the Azure DR region. For more information, see [Set up disaster recovery for a multi-tier SAP NetWeaver app deployment](/azure/site-recovery/site-recovery-sap).
+ - For projects required to remain in a single region for compliance reasons, consider a combined HADR configuration by using [Azure Availability Zones](./sap-ha-availability-zones.md#combined-high-availability-and-disaster-recovery-configuration).
+- An inventory of all SAP interfaces and the connected systems (SAP and non-SAP)
+- Design of foundation services. This design should include the following items, many of which are covered by the [landing zone accelerator for SAP](/azure/cloud-adoption-framework/scenarios/sap/):
+ - Network topology within Azure and assignment of the different SAP environments
+ - Active Directory and DNS design.
+ - Identity management solution for both end users and administration
+ - [Azure role-based access control (Azure RBAC)](../../../role-based-access-control/overview.md) structure for teams that manage infrastructure and SAP applications in Azure.
+ - Azure resource naming strategy
+ - Security operations for Azure resources and the workloads running within them
+- Security concept for protecting your SAP workload. This should include all aspects: networking and perimeter monitoring, application and database security, operating system hardening, and any infrastructure measures required, such as encryption. Identify the requirements with your compliance and security teams.
+- Microsoft recommends a Professional Direct, Premier, or Unified support contract. Identify your escalation paths and contacts for support with Microsoft. For SAP support requirements, see [SAP note 2015553](https://launchpad.support.sap.com/#/notes/2015553).
+- The number of Azure subscriptions and core quota for the subscriptions. [Open support requests to increase quotas of Azure subscriptions](../../../azure-portal/supportability/regional-quota-requests.md) as needed.
+- Data reduction and data migration plan for migrating SAP data into Azure. For SAP NetWeaver systems, SAP has guidelines on how to limit the volume of large amounts of data. See [this SAP guide](https://wiki.scn.sap.com/wiki/download/attachments/247399467/DVM_%20Guide_7.2.pdf?version=1&modificationDate=1549365516000&api=v2) about data management in SAP ERP systems. Some of the content also applies to NetWeaver and S/4HANA systems in general.
+- An automated deployment approach. Many customers start with scripts, using a combination of PowerShell, CLI, Ansible, and Terraform; a minimal scripted sketch follows this list. Microsoft-developed solutions for SAP deployment automation are:
+ - [Azure Center for SAP solutions](/azure/center-sap-solutions/), an Azure service to deploy and operate an SAP system's infrastructure
+ - [SAP on Azure Deployment Automation](./automation-deployment-framework.md), an open-source orchestration tool for deploying and maintaining SAP environments
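+
+The following is a minimal, hedged sketch of the scripted approach, using Azure PowerShell to check vCPU quota and deploy a single SAP application server VM. The resource group, region, VM size, image URN, and network names are hypothetical placeholders, not recommendations.
+
+```powershell
+# Check remaining vCPU quota for the target region before deploying (Az.Compute module)
+Get-AzVMUsage -Location "westeurope" |
+    Where-Object { $_.Name.LocalizedValue -like "*Dsv5*" }
+
+# Deploy a resource group and one SAP application server VM (placeholder names and image)
+New-AzResourceGroup -Name "rg-sap-dev" -Location "westeurope"
+New-AzVM -ResourceGroupName "rg-sap-dev" -Name "sapapp01" -Location "westeurope" `
+    -Size "Standard_E8ds_v5" -Image "SUSE:sles-sap-15-sp4:gen2:latest" `
+    -VirtualNetworkName "vnet-sap-dev" -SubnetName "snet-sap-app" `
+    -Credential (Get-Credential)
+```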
+
+> [!NOTE]
+> Define a regular design and deployment review cadence between you as the customer, the system integrator, Microsoft, and other involved parties.
+
+## [Pilot phase](#tab/pilot)
+
+### Pilot phase (strongly recommended)
+ You can run a pilot before or during project planning and preparation. You can also use the pilot phase to test approaches and designs made during the planning and preparation phase. And you can expand the pilot phase to make it a real proof of concept. We recommend that you set up and validate a full HADR solution and security design during a pilot deployment. Some customers perform scalability tests during this phase. Other customers use deployments of SAP sandbox systems as a pilot phase. We assume you've already identified a system that you want to migrate to Azure for the pilot.
-1. Optimize data transfer to Azure. The optimal choice is highly dependent on the specific scenario. Transfer from on-premises through [Azure ExpressRoute](https://azure.microsoft.com/services/expressroute/) is fastest if the ExpressRoute circuit has enough bandwidth. In other scenarios, transferring through the internet is faster.
-2. For a heterogeneous SAP platform migration that involves an export and import of data, test and optimize the export and import phases. For large migrations in which SQL Server is the destination platform, you can find [recommendations](https://techcommunity.microsoft.com/t5/Running-SAP-Applications-on-the/SAP-OS-DB-Migration-to-SQL-Server-8211-FAQ-v6-2-April-2017/ba-p/368070). You can use Migration Monitor/SWPM if you don't need a combined release upgrade. You can use the [SAP DMO](https://blogs.sap.com/2013/11/29/database-migration-option-dmo-of-sum-introduction/) process when you combine the migration with an SAP release upgrade. To do so, you need to meet certain requirements for the source and target DBMS platform combination. This process is documented in [Database Migration Option (DMO) of SUM 2.0 SP03](https://launchpad.support.sap.com/#/notes/2631152).
- 1. Export to source, export file upload to Azure, and import performance. Maximize overlap between export and import.
- 2. Evaluate the volume of the database on the target and destination platforms for the purposes of infrastructure sizing.
- 3. Validate and optimize timing.
-1. Technical validation.
- 1. VM types.
- - Review the resources in SAP support notes, in the SAP HANA hardware directory, and in the SAP PAM again. Make sure there are no changes to supported VMs for Azure, supported OS releases for those VM types, and supported SAP and DBMS releases.
- - Validate again the sizing of your application and the infrastructure you deploy on Azure. If you're moving existing applications, you can often derive the necessary SAPS from the infrastructure you use and the [SAP benchmark webpage](https://www.sap.com/dmc/exp/2018-benchmark-directory/#/sd) and compare it to the SAPS numbers listed in [SAP support note #1928533](https://launchpad.support.sap.com/#/notes/1928533). Also keep [this article on SAPS ratings](https://techcommunity.microsoft.com/t5/Running-SAP-Applications-on-the/SAPS-ratings-on-Azure-VMs-8211-where-to-look-and-where-you-can/ba-p/368208) in mind.
- - Evaluate and test the sizing of your Azure VMs with regard to maximum storage throughput and network throughput of the VM types you chose during the planning phase. You can find the data here:
- - [Sizes for Windows virtual machines in Azure](../../sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json). It's important to consider the *max uncached disk throughput* for sizing.
- - [Sizes for Linux virtual machines in Azure](../../sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json). It's important to consider the *max uncached disk throughput* for sizing.
- 2. Storage.
- - Check the document [Azure Storage types for SAP workload](./planning-guide-storage.md)
- - At a minimum, use [Azure Standard SSD storage](../../disks-types.md#standard-ssds) for VMs that represent SAP application layers and for deployment of DBMSs that aren't performance sensitive.
- - In general, we don't recommend the use of [Azure Standard HDD disks](../../disks-types.md#standard-hdds).
- - Use [Azure Premium Storage](../../disks-types.md#premium-ssds) for any DBMS VMs that are remotely performance sensitive.
- - Use [Azure managed disks](https://azure.microsoft.com/services/managed-disks/).
- - Use Azure Write Accelerator for DBMS log drives with M-Series. Be aware of Write Accelerator limits and usage, as documented in [Write Accelerator](../../how-to-enable-write-accelerator.md).
- - For the different DBMS types, check the [generic SAP-related DBMS documentation](./dbms_guide_general.md) and the DBMS-specific documentation that the generic document points to.
- - For more information about SAP HANA, see [SAP HANA infrastructure configurations and operations on Azure](./hana-vm-operations.md).
- - Never mount Azure data disks to an Azure Linux VM by using the device ID. Instead, use the universally unique identifier (UUID). Be careful when you use graphical tools to mount Azure data disks, for example. Double-check the entries in /etc/fstab to make sure the UUID is used to mount the disks. You can find more details in [this article](../../linux/attach-disk-portal.md#connect-to-the-linux-vm-to-mount-the-new-disk).
- 3. Networking.
- - Test and evaluate your virtual network infrastructure and the distribution of your SAP applications across or within the different Azure virtual networks.
- - Evaluate the hub-and-spoke virtual network architecture approach or the microsegmentation approach within a single Azure virtual network. Base this evaluation on:
- 1. Costs of data exchange between [peered Azure virtual networks](../../../virtual-network/virtual-network-peering-overview.md). For information about costs, see [Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/).
- 2. Advantages of a fast disconnection of the peering between Azure virtual networks as opposed to changing the network security group to isolate a subnet within a virtual network. This evaluation is for cases when applications or VMs hosted in a subnet of the virtual network became a security risk.
- 3. Central logging and auditing of network traffic between on-premises, the outside world, and the virtual datacenter you built in Azure.
- - Evaluate and test the data path between the SAP application layer and the SAP DBMS layer.
- - Placement of [Azure network virtual appliances](https://azure.microsoft.com/solutions/network-appliances/) in the communication path between the SAP application and the DBMS layer of SAP systems based on SAP NetWeaver, Hybris, or S/4HANA isn't supported.
- - Placement of the SAP application layer and SAP DBMS in different Azure virtual networks that aren't peered isn't supported.
- - You can use [application security group and network security group rules](../../../virtual-network/network-security-groups-overview.md) to define routes between the SAP application layer and the SAP DBMS layer.
- - Make sure that [Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is enabled on the VMs used in the SAP application layer and the SAP DBMS layer. Keep in mind that different OS levels are needed to support Accelerated Networking in Azure:
- - Windows Server 2012 R2 or later.
- - SUSE Linux 12 SP3 or later.
- - RHEL 7.4 or later.
- - Oracle Linux 7.5. If you're using the RHCKL kernel, release 3.10.0-862.13.1.el7 is required. If you're using the Oracle UEK kernel, release 5 is required.
- - Test and evaluate the network latency between the SAP application layer VMs and DBMS VMs according to SAP support notes [#500235](https://launchpad.support.sap.com/#/notes/500235) and [#1100926](https://launchpad.support.sap.com/#/notes/1100926/E). Evaluate the results against the network latency guidance in [SAP support note #1100926](https://launchpad.support.sap.com/#/notes/1100926/E). The network latency should be in the moderate or good range. Exceptions apply to traffic between VMs and HANA Large Instance units, as documented in [this article](./hana-network-architecture.md#networking-architecture-for-hana-large-instance).
- - Make sure ILB deployments are set up to use Direct Server Return. This setting will reduce latency when Azure ILBs are used for high availability configurations on the DBMS layer.
- - If you're using Azure Load Balancer together with Linux guest operating systems, check that the Linux network parameter **net.ipv4.tcp_timestamps** is set to **0**. This recommendation conflicts with recommendations in older versions of [SAP note #2382421](https://launchpad.support.sap.com/#/notes/2382421). The SAP note is now updated to state that this parameter needs to be set to **0** to work with Azure load balancers.
- - Consider using [Azure proximity placement groups](../../co-location.md) to get optimal network latency. For more information, see [Azure proximity placement groups for optimal network latency with SAP applications](sap-proximity-placement-scenarios.md).
- 4. High availability and disaster recovery deployments.
- - If you deploy the SAP application layer without defining a specific Azure Availability Zone, make sure that all VMs that run SAP dialog instances or middleware instances of a single SAP system are deployed in an [availability set](../../availability-set-overview.md).
- - If you don't need high availability for SAP Central Services and the DBMS, you can deploy these VMs into the same availability set as the SAP application layer.
- - If you protect SAP Central Services and the DBMS layer for high availability by using passive replication, place the two nodes for SAP Central Services in one separate availability set and the two DBMS nodes in another availability set.
- - If you deploy into Azure Availability Zones, you can't use availability sets. But you do need to make sure you deploy the active and passive Central Services nodes into two different Availability Zones. Use Availability Zones that have the lowest latency between them.
- Keep in mind that you need to use [Azure Standard Load Balancer](../../../load-balancer/load-balancer-standard-availability-zones.md) for the use case of establishing Windows or Pacemaker failover clusters for the DBMS and SAP Central Services layer across Availability Zones. You can't use [Basic Load Balancer](../../../load-balancer/load-balancer-overview.md) for zonal deployments.
- 5. Timeout settings.
- - Check the SAP NetWeaver developer traces of the SAP instances to make sure there are no connection breaks between the enqueue server and the SAP work processes. You can avoid these connection breaks by setting these two registry parameters:
- - HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\KeepAliveTime = 120000. For more information, see [KeepAliveTime](/previous-versions/windows/it-pro/windows-2000-server/cc957549(v=technet.10)).
- - HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\KeepAliveInterval = 120000. For more information, see [KeepAliveInterval](/previous-versions/windows/it-pro/windows-2000-server/cc957548(v=technet.10)).
- - To avoid GUI timeouts between on-premises SAP GUI interfaces and SAP application layers deployed in Azure, check whether these parameters are set in the default.pfl or the instance profile:
- - rdisp/keepalive_timeout = 3600
- - rdisp/keepalive = 20
- - To prevent disruption of established connections between the SAP enqueue process and the SAP work processes, you need to set the **enque/encni/set_so_keepalive** parameter to **true**. See also [SAP note #2743751](https://launchpad.support.sap.com/#/notes/2743751).
- - If you use a Windows failover cluster configuration, make sure that the time to react on non-responsive nodes is set correctly for Azure. The article [Tuning Failover Cluster Network Thresholds](https://techcommunity.microsoft.com/t5/Failover-Clustering/Tuning-Failover-Cluster-Network-Thresholds/ba-p/371834) lists parameters and how they affect failover sensitivities. Assuming the cluster nodes are in the same subnet, you should change these parameters:
- - SameSubNetDelay = 2000
- - SameSubNetThreshold = 15
- - RoutingHistorylength = 30
- 6. OS Settings or Patches
- - For running HANA on SAP, read these notes and documentations:
- - [SAP support note #2814271 - SAP HANA Backup fails on Azure with Checksum Error](https://launchpad.support.sap.com/#/notes/2814271)
- - [SAP support note #2753418 - Potential Performance Degradation Due to Timer Fallback](https://launchpad.support.sap.com/#/notes/2753418)
- - [SAP support note #2791572 - Performance Degradation Because of Missing VDSO Support For Hyper-V in Azure](https://launchpad.support.sap.com/#/notes/2791572)
- - [SAP support note #2382421 - Optimizing the Network Configuration on HANA- and OS-Level](https://launchpad.support.sap.com/#/notes/2382421)
- - [SAP support note #2694118 - Red Hat Enterprise Linux HA Add-On on Azure](https://launchpad.support.sap.com/#/notes/2694118)
- - [SAP support note #1984787 - SUSE LINUX Enterprise Server 12: Installation notes](https://launchpad.support.sap.com/#/notes/1984787)
- - [SAP support note #2002167 - Red Hat Enterprise Linux 7.x: Installation and Upgrade](https://launchpad.support.sap.com/#/notes/0002002167)
- - [SAP support note #2292690 - SAP HANA DB: Recommended OS settings for RHEL 7](https://launchpad.support.sap.com/#/notes/0002292690)
- - [SAP support note #2772999 - Red Hat Enterprise Linux 8.x: Installation and Configuration](https://launchpad.support.sap.com/#/notes/2772999)
- - [SAP support note #2777782 - SAP HANA DB: Recommended OS Settings for RHEL 8](https://launchpad.support.sap.com/#/notes/2777782)
- - [SAP support note #2578899 - SUSE Linux Enterprise Server 15: Installation Note](https://launchpad.support.sap.com/#/notes/2578899)
- - [SAP support note #2455582 - Linux: Running SAP applications compiled with GCC 6.x](https://launchpad.support.sap.com/#/notes/0002455582)
- - [SAP support note #2729475 - HWCCT Failed with Error "Hypervisor is not supported" on Azure VMs certified for SAP HANA](https://launchpad.support.sap.com/#/notes/2729475)
-1. Test your high availability and disaster recovery procedures.
- 1. Simulate failover situations by shutting down VMs (Windows guest operating systems) or putting operating systems in panic mode (Linux guest operating systems). This step will help you figure out whether your failover configurations work as designed.
- 1. Measure how long it takes to execute a failover. If the times are too long, consider:
- - For SUSE Linux, use SBD devices instead of the Azure Fence agent to speed up failover.
- - For SAP HANA, if the reload of data takes too long, consider provisioning more storage bandwidth.
- 3. Test your backup/restore sequence and timing and make corrections if you need to. Make sure that backup times are sufficient. You also need to test the restore and time restore activities. Make sure that restore times are within your RTO SLAs wherever your RTO relies on a database or VM restore process.
- 4. Test cross-region DR functionality and architecture.
-1. Security checks.
- 1. Test the validity of your Azure role-based access control (Azure RBAC) architecture. The goal is to separate and limit the access and permissions of different teams. For example, SAP Basis team members should be able to deploy VMs and assign disks from Azure Storage into a given Azure virtual network. But the SAP Basis team shouldn't be able to create its own virtual networks or change the settings of existing virtual networks. Members of the network team shouldn't be able to deploy VMs into virtual networks in which SAP application and DBMS VMs are running. Nor should members of this team be able to change attributes of VMs or even delete VMs or disks.
- 1. Verify that [network security group and ASC](../../../virtual-network/network-security-groups-overview.md) rules work as expected and shield the protected resources.
- 1. Make sure that all resources that need to be encrypted are encrypted. Define and implement processes to back up certificates, store and access those certificates, and restore the encrypted entities.
- 1. For storage encryption, server-side encrption with platform managed key (SSE-PMK) is enabled for every managed disks in Azure by default. [Key management](../../disk-encryption.md) with customer managed keys can be considered, if required for customer owned key rotation.
- 1. [Host based server-side encryption](../../disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data) should not be enabled for performance reasons on M-series family Linux VMs.
- 1. Do not use Azure Disk Encryption with SAP as [OS images](../../linux/disk-encryption-overview.md#supported-operating-systems) for SAP are not supported.
- 1. Database native encryption can be considered, such as transparent data encryption (TDE). Encryption key management and location must be secured. Database encryption occurs inside the VM and is independent of any storage encryption such as SSE.
-1. Performance testing. In SAP, based on SAP tracing and measurements, make these comparisons:
- - Where applicable, compare the top 10 online reports to your current implementation.
- - Where applicable, compare the top 10 batch jobs to your current implementation.
- - Compare data transfers through interfaces into the SAP system. Focus on interfaces where you know the transfer is going between different locations, like from on-premises to Azure.
--
-## Non-production phase
+- Optimize data transfer to Azure. The optimal choice is highly dependent on the specific scenario. If private connectivity is required, for example for database replication, [Azure ExpressRoute](https://azure.microsoft.com/services/expressroute/) is fastest if the ExpressRoute circuit has enough bandwidth. In other scenarios, transferring through the internet is faster. Optionally, use a dedicated migration VPN for private connectivity to Azure. Any migration network path used during the pilot should mirror the path planned for future production systems, eliminating any impact to workloads (SAP or non-SAP) already running in Azure.
+- For a heterogeneous SAP migration that involves an export and import of data, test and optimize the export and import phases. For migration of large SAP environments, go through the available best practices, for example [Migrate very large databases (VLDB) to Azure for SAP](/training/modules/migrate-very-large-databases-to-azure/). Use the appropriate tool for the migration scenario, depending on your source and target SAP releases, the DBMS, and whether you're combining the migration with other tasks such as a release upgrade or even a Unicode or S/4HANA conversion. SAP provides Migration Monitor/SWPM, [SAP DMO](https://blogs.sap.com/2013/11/29/database-migration-option-dmo-of-sum-introduction/), and DMO with system move, in addition to other downtime-minimizing approaches available as separate services from SAP. The latest releases of SAP DMO with system move also support azcopy for data transfer over the internet, enabling the quickest network path natively; a minimal azcopy sketch follows this list.
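+
+Where export dumps or database backups are staged in Azure Blob Storage over the internet, `azcopy` can move them efficiently. A minimal sketch run from PowerShell, assuming a hypothetical local export directory, storage account, container, and SAS token:
+
+```powershell
+# Upload the SAP export directory to a blob container (placeholder account, container, and SAS token)
+azcopy copy "D:\export" `
+    "https://sapmigstore01.blob.core.windows.net/export?<SAS-token>" `
+    --recursive
+```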
+
+### Technical validation
+
+- **Compute / VM types**
+ - Review the resources in SAP support notes, in the SAP HANA hardware directory, and in the SAP PAM again. Make sure to match supported VMs for Azure, supported OS releases for those VM types, and supported SAP and DBMS releases.
+ - Validate again the sizing of your application and the infrastructure you deploy on Azure. If you're moving existing applications, you can often derive the necessary SAPS from the infrastructure you use and the [SAP benchmark webpage](https://www.sap.com/dmc/exp/2018-benchmark-directory/#/sd) and compare it to the SAPS numbers listed in [SAP note 1928533](https://launchpad.support.sap.com/#/notes/1928533). Also keep [this article on SAPS ratings](https://techcommunity.microsoft.com/t5/Running-SAP-Applications-on-the/SAPS-ratings-on-Azure-VMs-8211-where-to-look-and-where-you-can/ba-p/368208) in mind.
+ - Evaluate and test the sizing of your Azure VMs for the maximum storage and network throughput of the VM types you chose during the planning phase. Details of [VM performance limits](/azure/virtual-machines/sizes) are available; for storage, it's important to consider the limit for max uncached disk throughput when sizing. Carefully consider sizing and the temporary effects of [disk and VM level bursting](/azure/virtual-machines/disk-bursting). A sketch for checking a VM size's published limits follows this list.
+ - Test and determine whether you want to create your own OS images for your VMs in Azure or whether you want to use an image from the Azure Compute Gallery (formerly known as Shared Image Gallery). If you're using an image from the Azure Compute Gallery, make sure to use an image that reflects the support contract with your OS vendor. For some OS vendors, Azure Compute Gallery lets you bring your own license images. For other OS images, support is included in the price quoted by Azure.
+ - Using your own OS images allows you to store required enterprise dependencies, such as security agents, hardening, and customizations, directly in the image. This way, they're deployed immediately with every VM. If you decide to create your own OS images, you can find documentation in these articles:
+ - [Build a generalized image of a Windows VM deployed in Azure](/azure/virtual-machines/windows/capture-image-resource)
+ - [Build a generalized image of a Linux VM deployed in Azure](/azure/virtual-machines/linux/capture-image)
+ - If you use Linux images from the Azure compute gallery and add hardening as part of your deployment pipeline, you need to use the images suitable for SAP provided by the Linux vendors.
+ - [Red Hat Enterprise Linux for SAP Offerings on Microsoft Azure FAQ](https://access.redhat.com/articles/5456301)
+ - [SUSE public cloud information tracker - OS Images for SAP](https://pint.suse.com/?resource=images&csp=microsoft&search=sap)
+ - [Oracle Linux](https://www.oracle.com/cloud/azure/interconnect/faq/)
+ - Choosing an OS image determines the generation of the Azure VM. Azure supports both [Hyper-V generation 1 and 2 VMs](/azure/virtual-machines/generation-2). Some VM families are available as [generation 2 only](/azure/virtual-machines/generation-2#generation-2-vm-sizes), and some VM families are certified for SAP use as generation 2 only ([SAP note 1928533](https://launchpad.support.sap.com/#/notes/1928533)) even if Azure allows both generations. **It's recommended to use generation 2 VMs for every VM of the SAP landscape.**
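+
+A hedged sketch for checking a candidate VM size's published limits and supported Hyper-V generations with Azure PowerShell. The region and VM size are placeholders, and the exact capability names can vary between VM families.
+
+```powershell
+# Inspect the published capabilities of a candidate VM size (placeholder size and region)
+$sku = Get-AzComputeResourceSku |
+    Where-Object { $_.ResourceType -eq "virtualMachines" -and
+                   $_.Name -eq "Standard_E32ds_v5" -and
+                   $_.Locations -contains "westeurope" }
+
+# Capabilities include entries such as HyperVGenerations, UncachedDiskIOPS, and
+# UncachedDiskBytesPerSecond, which matter for SAP sizing decisions
+$sku.Capabilities | Format-Table Name, Value
+```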
+
+- **Storage**
+ - Read the document [Azure storage types for SAP workload](./planning-guide-storage.md)
+ - Use [Azure premium storage](/azure/virtual-machines/disks-types#premium-ssds) or [premium storage v2](/azure/virtual-machines/disks-types#premium-ssd-v2) for all production-grade SAP environments and whenever a high SLA must be ensured. For some DBMS, Azure NetApp Files can be used for [large parts of the overall storage requirements](./planning-guide-storage.md#azure-netapp-files-anf). A sketch of attaching premium data disks for striping follows this list.
+ - At a minimum, use [Azure standard SSD](/azure/virtual-machines/disks-types#standard-ssds) storage for VMs that represent SAP application layers and for deployment of DBMSs that aren't performance sensitive. Keep in mind that different Azure storage types influence the [single VM availability SLA](https://azure.microsoft.com/support/legal/sla/virtual-machines).
+ - In general, we don't recommend the use of [Azure standard HDD](./planning-guide-storage.md#azure-standard-hdd-storage) disks for SAP.
+ - For the different DBMS types, check the [generic SAP-related DBMS documentation](./dbms_guide_general.md) and DBMS-specific documentation that the first document points to. Use disk striping over multiple disks with premium storage (v1 or v2) for database data and log area.
+ - For optimal storage configuration with SAP HANA, see [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md).
+ - Use LVM for all disks on Linux VMs, as it allows easier management and online expansion. This includes volumes on single disks, for example /usr/sap.
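+
+A minimal sketch of adding premium data disks to an existing DBMS VM so they can be striped for the data and log areas. It uses Azure PowerShell with placeholder resource group, VM, and disk names; the sizes are illustrative only.
+
+```powershell
+# Attach two empty Premium SSD data disks to a DBMS VM for striping (placeholder names and sizes)
+$vm = Get-AzVM -ResourceGroupName "rg-sap-prod" -Name "sapdb01"
+
+$vm = Add-AzVMDataDisk -VM $vm -Name "sapdb01-data01" -Lun 0 -CreateOption Empty `
+        -DiskSizeInGB 512 -StorageAccountType "Premium_LRS" -Caching None
+$vm = Add-AzVMDataDisk -VM $vm -Name "sapdb01-data02" -Lun 1 -CreateOption Empty `
+        -DiskSizeInGB 512 -StorageAccountType "Premium_LRS" -Caching None
+
+Update-AzVM -ResourceGroupName "rg-sap-prod" -VM $vm
+```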
+
+- **Networking**
+ - Test and evaluate your virtual network infrastructure and the distribution of your SAP applications across or within the different Azure virtual networks.
+ - Evaluate a hub-and-spoke or Virtual WAN network architecture with discrete virtual network spokes for the SAP workload. For smaller scale, consider a micro-segmentation approach within a single Azure virtual network. Base this evaluation on:
+ - Costs of data exchange [between peered Azure virtual networks](/azure/virtual-network/virtual-network-peering-overview)
+ - Advantages of a fast disconnection of the peering between Azure virtual networks as opposed to changing the network security group to isolate a subnet within a virtual network. This evaluation is for cases when applications or VMs hosted in a subnet of the virtual network became a security risk.
+ - Central logging and auditing of network traffic between on-premises, the outside world, and the virtual datacenter you built in Azure.
+ - Evaluate and test the data path between the SAP application layer and the SAP DBMS layer.
+ - Placement of [Azure network virtual appliances](https://azure.microsoft.com/solutions/network-appliances/) in the communication path between the SAP application and the DBMS layer of SAP systems running the SAP kernel isn't supported.
+ - Placement of the SAP application layer and SAP DBMS in different Azure virtual networks that aren't peered isn't supported.
+ - You can use [application security group and network security group rules](/azure/virtual-network/network-security-groups-overview) to secure communication paths to and between the SAP application layer and the SAP DBMS layer.
+ - Make sure that [accelerated networking](/azure/virtual-network/accelerated-networking-overview) is enabled on every VM used for SAP; one way to verify this is shown in the sketch after this list.
+ - Test and evaluate the network latency between the SAP application layer VMs and DBMS VMs according to SAP notes [500235](https://launchpad.support.sap.com/#/notes/500235) and [1100926](https://launchpad.support.sap.com/#/notes/1100926). In addition to SAP's niping, you can use tools such as [sockperf](https://github.com/Mellanox/sockperf) or [ethr](https://github.com/microsoft/ethr) for TCP latency measurement. Evaluate the results against the network latency guidance in [SAP note 1100926](https://launchpad.support.sap.com/#/notes/1100926). The network latency should be in the moderate or good range.
+ - Optimize network throughput on high vCPU VMs, which are typically used for database servers. This is particularly important for HANA scale-out and any large SAP system. Follow the recommendations in [this article](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/optimizing-network-throughput-on-azure-m-series-vms/ba-p/3581129) for optimization.
+ - If you deploy with availability sets and the latency measurements don't meet the SAP requirements in [SAP note 1100926](https://launchpad.support.sap.com/#/notes/1100926), consider the guidance in the article on [proximity placement groups](./sap-proximity-placement-scenarios.md) to get optimal network latency. Don't use proximity placement groups for zonal or cross-zonal deployment patterns.
+ - Verify correct availability, routing, and secure access from the SAP landscape to any needed internet endpoints, such as OS patch repositories, deployment tooling, or service endpoints. Similarly, if your SAP environment provides a publicly accessible service such as SAP Fiori or SAProuter, verify that it's reachable and secured.
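+
+One way to verify accelerated networking across the landscape is to query the network interfaces with Azure PowerShell. A minimal sketch, assuming a placeholder resource group name:
+
+```powershell
+# List NICs in a resource group and whether accelerated networking is enabled (placeholder name)
+Get-AzNetworkInterface -ResourceGroupName "rg-sap-prod" |
+    Select-Object Name, EnableAcceleratedNetworking
+```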
+
+- **High availability and disaster recovery deployments**
+ - Always use standard load balancer for clustered environments. Basic load balancer will be [retired](/azure/load-balancer/skus).
+ - If you deploy the SAP application layer without defining a specific availability zone, make sure that all VMs that run SAP dialog instances or middleware instances of a single SAP system are deployed in an [availability set](/azure/virtual-machines/availability-set-overview); a sketch of creating one follows this list.
+ - If you don't need high availability for SAP Central Services and the DBMS, you can deploy these VMs into the same availability set as the SAP application layer.
+ - When you protect SAP Central Services and the DBMS layer for high availability by using passive replication, place the two nodes for SAP Central Services in one separate availability set and the two DBMS nodes in another availability set.
+ - If you deploy into [availability zones](./sap-ha-availability-zones.md), you can't combine with availability sets. But you do need to make sure you deploy the active and passive central services nodes into two different availability zones. Use two availability zones that have the lowest latency between them.
+ - If you're using Azure Load Balancer together with Linux guest operating systems, check that the Linux network parameter net.ipv4.tcp_timestamps is set to 0. This recommendation conflicts with recommendations in older versions of [SAP note 2382421](https://launchpad.support.sap.com/#/notes/2382421). The SAP note is now updated to state that this parameter needs to be set to 0 to work with Azure load balancers.
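+
+A hedged sketch of creating an availability set for the SAP application layer and confirming that existing load balancers use the Standard SKU. The resource group, name, and region are placeholders; adjust the fault domain count to what the target region supports.
+
+```powershell
+# Create an availability set for SAP application servers (placeholder names and region)
+New-AzAvailabilitySet -ResourceGroupName "rg-sap-prod" -Name "avset-sap-app" `
+    -Location "westeurope" -Sku Aligned `
+    -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 20
+
+# Confirm that no Basic SKU load balancers remain in the subscription
+Get-AzLoadBalancer | Select-Object Name, @{ n = "Sku"; e = { $_.Sku.Name } }
+```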
+
+- **Timeout settings**
+ - Check the SAP NetWeaver developer traces of the SAP instances to make sure there are no connection breaks between the enqueue server and the SAP work processes. You can avoid these connection breaks by setting these two registry parameters (one way to apply them is shown in the sketch after this list):
+ - HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\KeepAliveTime = 120000. For more information, see [KeepAliveTime](/previous-versions/windows/it-pro/windows-2000-server/cc957549(v=technet.10)).
+ - HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\KeepAliveInterval = 120000. For more information, see [KeepAliveInterval](/previous-versions/windows/it-pro/windows-2000-server/cc957548(v=technet.10)).
+ - To avoid GUI timeouts between on-premises SAP GUI interfaces and SAP application layers deployed in Azure, check whether these parameters are set in the default.pfl or the instance profile:
+ - rdisp/keepalive_timeout = 3600
+ - rdisp/keepalive = 20
+ - To prevent disruption of established connections between the SAP enqueue process and the SAP work processes, you need to set the enque/encni/set_so_keepalive parameter to true. See also [SAP note 2743751](https://launchpad.support.sap.com/#/notes/2743751).
+ - If you use a Windows failover cluster configuration, make sure that the time to react on non-responsive nodes is set correctly for Azure. The article [Tuning Failover Cluster Network Thresholds](https://techcommunity.microsoft.com/t5/Failover-Clustering/Tuning-Failover-Cluster-Network-Thresholds/ba-p/371834) lists parameters and how they affect failover sensitivities. Assuming the cluster nodes are in the same subnet, you should change these parameters:
+ - SameSubNetDelay = 2000 (number of milliseconds between "heartbeats")
+ - SameSubNetThreshold = 15 (maximum number of consecutive missed heartbeats)
+ - RoutingHistorylength = 30 (seconds; 2000 ms * 15 heartbeats = 30 s)
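+
+A minimal sketch for applying the TCP keepalive registry values on a Windows SAP application server and reviewing the failover cluster heartbeat settings. Run it in an elevated PowerShell session; the TCP/IP parameters typically require a reboot to take effect.
+
+```powershell
+# Set the TCP keepalive registry values listed above (elevated session required)
+$tcpip = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
+New-ItemProperty -Path $tcpip -Name "KeepAliveTime" -Value 120000 -PropertyType DWord -Force
+New-ItemProperty -Path $tcpip -Name "KeepAliveInterval" -Value 120000 -PropertyType DWord -Force
+
+# Review the current failover cluster heartbeat settings (FailoverClusters module)
+Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold
+```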
+
+- **OS Settings or Patches**
+ - For running SAP (including SAP HANA) on Azure, read these notes and documents, in addition to SAP's non-Azure-specific documentation and other support notes:
+ - [Azure-specific SAP notes](https://launchpad.support.sap.com/#/mynotes?tab=Search&sortBy=Relevance&filters=themk%25253Aeq~'BC-OP-NT-AZR'~'BC-OP-LNX-AZR'%25252BreleaseStatus%25253Aeq~'NotRestricted'%25252BsecurityPatchDay%25253Aeq~'NotRestricted'%25252BfuzzyThreshold%25253Aeq~'0.9') linked to the SAP support components BC-OP-NT-AZR or BC-OP-LNX-AZR. Go through these notes in detail, as they contain updated solutions.
+ - [SAP note 2382421 - Optimizing the Network Configuration on HANA- and OS-Level](https://launchpad.support.sap.com/#/notes/2382421)
+ - [SAP note 2235581 - SAP HANA: Supported Operating Systems](https://launchpad.support.sap.com/#/notes/2235581)
+
+### Additional checks for the pilot phase
+
+- **Test your high availability and disaster recovery procedures**
+ - Simulate failover situations by using a tool such as [NotMyFault](/sysinternals/downloads/notmyfault) (Windows), or by putting the operating system into panic mode or disabling the network interface with ifdown (Linux). This step will help you figure out whether your failover configurations work as designed.
+ - Measure how long it takes to execute a failover. If the times are too long, consider:
+ - For SUSE Linux, use SBD devices instead of the Azure Fence agent to speed up failover.
+ - For SAP HANA, if the reload of data takes too long, consider provisioning more storage bandwidth.
+ - Test your backup/restore sequence and timing and make corrections if you need to. Make sure that backup times are sufficient. You also need to test the restore and time restore activities. Make sure that restore times are within your RTO SLAs wherever your RTO relies on a database or VM restore process.
+ - Test cross-region DR functionality and architecture, and verify that the RPO and RTO match your expectations.
+
+- **Security checks**
+ - Test the validity of your Azure role-based access control (Azure RBAC) architecture. Segregation of duties requires you to separate and limit the access and permissions of different teams; a sketch of a scoped role assignment follows this list. For example, SAP Basis team members should be able to deploy VMs and assign disks from Azure Storage to a given Azure virtual machine. But the SAP Basis team shouldn't be able to create its own virtual networks or change the settings of existing virtual networks. Members of the network team shouldn't be able to deploy VMs into virtual networks in which SAP application and DBMS VMs are running. Nor should members of this team be able to change attributes of VMs or even delete VMs or disks.
+ - Verify that [network security group and ASG rules](/azure/virtual-network/network-security-groups-overview) work as expected and shield the protected resources.
+ - Make sure that all resources that need to be encrypted are encrypted. Define and implement processes to back up certificates, store and access those certificates, and restore the encrypted entities.
+ - For storage encryption, server-side encryption with platform managed key (SSE-PMK) is enabled for every storage service used for SAP in Azure by default, including managed disks, Azure Files and Azure NetApp Files. [Key management](/azure/virtual-machines/disk-encryption) with customer managed keys can be considered, if required for customer owned key rotation.
+ - [Host based server-side encryption](/azure/virtual-machines/disk-encryption#encryption-at-hostend-to-end-encryption-for-your-vm-data) should not be enabled for performance reasons on M-series family Linux VMs.
+ - Do not use Azure Disk Encryption on Linux with SAP, because [OS images 'for SAP'](/azure/virtual-machines/linux/disk-encryption-overview#supported-operating-systems) are not supported.
+ - Database native encryption is deployed by most SAP on Azure customers to protect DBMS data and backups. Transparent Data Encryption (TDE) typically has no noticeable performance overhead, greatly increases security, and should be considered. Encryption key management and location must be secured. Database encryption occurs inside the VM and is independent of any storage encryption such as SSE.
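+
+A hedged sketch of how segregation of duties could be expressed with Azure RBAC: the SAP Basis team's Azure AD group is granted Virtual Machine Contributor on the SAP resource group only, so it can manage VMs and disks but not the virtual networks owned by the network team. The group object ID, subscription ID, and resource group name are placeholders.
+
+```powershell
+# Grant the SAP Basis group VM-level rights scoped to the SAP resource group only (placeholder IDs)
+New-AzRoleAssignment -ObjectId "00000000-0000-0000-0000-000000000000" `
+    -RoleDefinitionName "Virtual Machine Contributor" `
+    -Scope "/subscriptions/<subscription-id>/resourceGroups/rg-sap-prod"
+```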
+
+- **Performance testing**
+In SAP, based on SAP tracing and measurements, make these comparisons:
+ - Inventory and baseline the current on-premises system
+ - SAR reports / perfmon
+ - STAT trace top 10 online reports
+ - Collect batch job history
+ - Focus testing on verifying business process performance. Do not compare hardware KPIs initially and in a vacuum; use them only when troubleshooting any performance differences.
+ - Where applicable, compare the top 10 online reports to your current implementation.
+ - Where applicable, compare the top 10 batch jobs to your current implementation.
+ - Compare data transfers through interfaces into the SAP system. Focus on interfaces where you know the transfer is going between different locations, like from on-premises to Azure.
+
+## [Non-production phase](#tab/non-prod)
+
+### Non-production phase
 In this phase, we assume that after a successful pilot or proof of concept (POC), you're starting to deploy non-production SAP systems to Azure. Incorporate everything you learned and experienced during the POC into this deployment. All the criteria and steps listed for POCs apply to this deployment as well.
-During this phase, you usually deploy development systems, unit testing systems, and business regression testing systems to Azure. We recommend that at least one non-production system in one SAP application line has the full high availability configuration that the future production system will have. Here are some additional steps that you need to complete during this phase:
-
-1. Before you move systems from the old platform to Azure, collect resource consumption data, like CPU usage, storage throughput, and IOPS data. Especially collect this data from the DBMS layer units, but also collect it from the application layer units. Also measure network and storage latency.
-2. Record the availability usage time patterns of your systems. The goal is to figure out whether non-production systems need to be available all day, every day or whether there are non-production systems that can be shut down during certain phases of a week or month.
-3. Test and determine whether you want to create your own OS images for your VMs in Azure or whether you want to use an image from the Azure Azure Compute Gallery (formerly known as Shared Image Gallery). If you're using an image from the Azure Compute Gallery, make sure to use an image that reflects the support contract with your OS vendor. For some OS vendors, Azure Compute Gallery lets you bring your own license images. For other OS images, support is included in the price quoted by Azure. If you decide to create your own OS images, you can find documentation in these articles:
- - [Build a generalized image of a Windows VM deployed in Azure](../../windows/capture-image-resource.md)
- - [Build a generalized image of a Linux VM deployed in Azure](../../linux/capture-image.md)
-3. If you use SUSE and Red Hat Linux images from the Azure Compute Gallery, you need to use the images for SAP provided by the Linux vendors in the Azure Compute Gallery.
-4. Make sure to fulfill the SAP support requirements for Microsoft support agreements. See [SAP support note #2015553](https://launchpad.support.sap.com/#/notes/2015553). For HANA Large Instances, see [Onboarding requirements](./hana-onboarding-requirements.md).
-4. Make sure the right people get [planned maintenance notifications](https://azure.microsoft.com/blog/a-new-planned-maintenance-experience-for-your-virtual-machines/) so you can choose the best downtimes.
-5. Frequently check for Azure presentations on channels like [Channel 9](/teamblog/channel9joinedmicrosoftlearn) for new functionality that might apply to your deployments.
-6. Check SAP notes related to Azure, like [support note #1928533](https://launchpad.support.sap.com/#/notes/1928533), for new VM SKUs and newly supported OS and DBMS releases. Compare the pricing of new VM types against that of older VM types, so you can deploy VMs with the best price/performance ratio.
-7. Recheck SAP support notes, the SAP HANA hardware directory, and the SAP PAM. Make sure there were no changes in supported VMs for Azure, supported OS releases on those VMs, and supported SAP and DBMS releases.
-8. Check [the SAP website](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120) for new HANA-certified SKUs in Azure. Compare the pricing of new SKUs with the ones you planned to use. Eventually, make necessary changes to use the ones that have the best price/performance ratio.
-9. Adapt your deployment scripts to use new VM types and incorporate new Azure features that you want to use.
-10. After deployment of the infrastructure, test and evaluate the network latency between SAP application layer VMs and DBMS VMs, according to SAP support notes [#500235](https://launchpad.support.sap.com/#/notes/500235) and [#1100926](https://launchpad.support.sap.com/#/notes/1100926/E). Evaluate the results against the network latency guidance in [SAP support note #1100926](https://launchpad.support.sap.com/#/notes/1100926/E). The network latency should be in the moderate or good range. Exceptions apply to traffic between VMs and HANA Large Instance units, as documented in [this article](./hana-network-architecture.md#networking-architecture-for-hana-large-instance). Make sure that none of the restrictions mentioned in [Considerations for Azure Virtual Machines DBMS deployment for SAP workloads](./dbms_guide_general.md#azure-network-considerations) and [SAP HANA infrastructure configurations and operations on Azure](./hana-vm-operations.md) apply to your deployment.
-11. Make sure your VMs are deployed to the correct [Azure proximity placement group](../../co-location.md), as described in [Azure proximity placement groups for optimal network latency with SAP applications](sap-proximity-placement-scenarios.md).
-11. Perform all the other checks listed for the proof of concept phase before applying the workload.
-12. As the workload applies, record the resource consumption of the systems in Azure. Compare this consumption with records from your old platform. Adjust VM sizing of future deployments if you see that you have large differences. Keep in mind that when you downsize, storage, and network bandwidths of VMs will be reduced as well.
- - [Sizes for Windows virtual machines in Azure](../../sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
- - [Sizes for Linux virtual machines in Azure](../../sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
-13. Experiment with system copy functionality and processes. The goal is to make it easy for you to copy a development system or a test system, so project teams can get new systems quickly.
-14. Optimize and hone your team's Azure role-based access, permissions, and processes to make sure you have separation of duties. At the same time, make sure all teams can perform their tasks in the Azure infrastructure.
-15. Exercise, test, and document high-availability and disaster recovery procedures to enable your staff to execute these tasks. Identify shortcomings and adapt new Azure functionality that you're integrating into your deployments.
--
-## Production preparation phase
-In this phase, collect what you experienced and learned during your non-production deployments and apply it to future production deployments. You also need to prepare the work of the data transfer between your current hosting location and Azure.
-
-1. Complete necessary SAP release upgrades of your production systems before moving to Azure.
-1. Agree with the business owners on functional and business tests that need to be conducted after migration of the production system.
-1. Make sure these tests are completed with the source systems in the current hosting location. Avoid conducting tests for the first time after the system is moved to Azure.
-1. Test the process of migrating production systems to Azure. If you're not moving all production systems to Azure during the same time frame, build groups of production systems that need to be at the same hosting location. Test data migration. Here are some common methods:
- - Use DBMS methods like backup/restore in combination with SQL Server Always On, HANA System Replication, or Log shipping to seed and synchronize database content in Azure.
- - Use backup/restore for smaller databases.
- - Use SAP Migration Monitor, which is integrated into SAP SWPM, to perform heterogeneous migrations.
- - Use the [SAP DMO](https://blogs.sap.com/2013/11/29/database-migration-option-dmo-of-sum-introduction/) process if you need to combine your migration with an SAP release upgrade. Keep in mind that not all combinations of source DBMS and target DBMS are supported. You can find more information in the specific SAP support notes for the different releases of DMO. For example, [Database Migration Option (DMO) of SUM 2.0 SP04](https://launchpad.support.sap.com/#/notes/2644872).
- - Test whether data transfer throughput is better through the internet or through ExpressRoute, in case you need to move backups or SAP export files. If you're moving data through the internet, you might need to change some of your network security group/application security group rules that you'll need to have in place for future production systems.
-1. Before moving systems from your old platform to Azure, collect resource consumption data. Useful data includes CPU usage, storage throughput, and IOPS data. Especially collect this data from the DBMS layer units, but also collect it from the application layer units. Also measure network and storage latency.
-1. Recheck SAP support notes and the required OS settings, the SAP HANA hardware directory, and the SAP PAM. Make sure there were no changes in supported VMs for Azure, supported OS releases in those VMs, and supported SAP and DBMS releases.
-1. Update deployment scripts to take into account the latest decisions you've made on VM types and Azure functionality.
-1. After deploying infrastructure and applications, validate that:
- - The correct VM types were deployed, with the correct attributes and storage sizes.
- - The VMs are on the correct and desired OS releases and patches and are uniform.
- - VMs are hardened as required and in a uniform way.
- - The correct application releases and patches were installed and deployed.
- - The VMs were deployed into Azure availability sets as planned.
- - Azure Premium Storage is used for latency-sensitive disks or where the [single-VM SLA of 99.9%](https://azure.microsoft.com/support/legal/sla/virtual-machines/v1_8/) is required.
- - Azure Write Accelerator is deployed correctly.
- - Make sure that, within the VMs, storage spaces, or stripe sets were built correctly across disks that need Write Accelerator.
- - Check the [configuration of software RAID on Linux](/previous-versions/azure/virtual-machines/linux/configure-raid).
- - Check the [configuration of LVM on Linux VMs in Azure](/previous-versions/azure/virtual-machines/linux/configure-lvm).
- - [Azure managed disks](https://azure.microsoft.com/services/managed-disks/) are used exclusively.
- - VMs were deployed into the correct availability sets and Availability Zones.
- - [Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is enabled on the VMs used in the SAP application layer and the SAP DBMS layer.
- - No [Azure network virtual appliances](https://azure.microsoft.com/solutions/network-appliances/) are in the communication path between the SAP application and the DBMS layer of SAP systems based on SAP NetWeaver, Hybris, or S/4HANA.
- - Application security group and network security group rules allow communication as desired and planned and block communication where required.
- - Timeout settings are set correctly, as described earlier.
- - VMs are deployed to the correct [Azure proximity placement group](../../co-location.md), as described in [Azure proximity placement groups for optimal network latency with SAP applications](sap-proximity-placement-scenarios.md).
- - Network latency between SAP application layer VMs and DBMS VMs is tested and validated as described in SAP support notes [#500235](https://launchpad.support.sap.com/#/notes/500235) and [#1100926](https://launchpad.support.sap.com/#/notes/1100926/E). Evaluate the results against the network latency guidance in [SAP support note #1100926](https://launchpad.support.sap.com/#/notes/1100926/E). The network latency should be in the moderate or good range. Exceptions apply to traffic between VMs and HANA Large Instance units, as documented in [this article](./hana-network-architecture.md#networking-architecture-for-hana-large-instance).
- - Encryption was implemented where necessary and with the appropriate encryption method.
- - Interfaces and other applications can connect the newly deployed infrastructure.
-1. Create a playbook for reacting to planned Azure maintenance. Determine the order in which systems and VMs should be rebooted for planned maintenance.
-
-
-## Go-live phase
+During this phase, you usually deploy development systems, unit testing systems, and business regression testing systems to Azure. We recommend that at least one non-production system in one SAP application line has the full high availability configuration that the future production system will have. Here are some tasks that you need to complete during this phase:
+
+- Before you move systems from the old platform to Azure, collect resource consumption data, like CPU usage, storage throughput, and IOPS data. Especially collect this data from the DBMS layer units, but also collect it from the application layer units. Also measure network and storage latency. Adapt your sizing and design based on the captured data. Use tools such as sysstat, KSAR, [nmon](https://nmon.sourceforge.net/), and [nmon analyzer for Excel](https://nmon.sourceforge.net/pmwiki.php?n=Site.Nmon-Analyser) to capture and present resource utilization over peak periods.
+- Record the availability usage time patterns of your systems. The goal is to figure out whether non-production systems need to be available all day, every day or whether there are non-production systems that can be shut down during certain phases of a week or month.
+- Reevaluate your OS image choice, VM generation (Generation 2 throughout the SAP landscape), and OS patch strategy.
+- Make sure to fulfill the SAP support requirements for Microsoft support agreements. See [SAP note 2015553](https://launchpad.support.sap.com/#/notes/2015553).
+- Check SAP notes related to Azure, like [note 1928533](https://launchpad.support.sap.com/#/notes/1928533), for new VM SKUs and newly supported OS and DBMS releases. Compare the pricing of new VM types against that of older VM types, so you can deploy VMs with the best price/performance ratio. A sketch after this list shows one way to enumerate the VM sizes available in your target region.
+- Recheck SAP support notes, the SAP HANA hardware directory, and the SAP PAM. Make sure there were no changes in supported VMs for Azure, supported OS releases on those VMs, and supported SAP and DBMS releases.
+- Check the [SAP website](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120) for new HANA-certified SKUs in Azure. Compare the pricing of new SKUs with the ones you planned to use. If appropriate, make the necessary changes to use the ones that have the best price/performance ratio.
+- Adapt your deployment automation to use new VM types and incorporate new Azure features that you want to use.
+- After deployment of the infrastructure, test and evaluate the network latency between SAP application layer VMs and DBMS VMs, according to SAP note [500235](https://launchpad.support.sap.com/#/notes/500235). Evaluate the results against the network latency guidance in [SAP note 1100926](https://launchpad.support.sap.com/#/notes/1100926). The network latency should be in the moderate or good range. In addition to using tools such as niping, [sockperf](https://github.com/Mellanox/sockperf), or [ethr](https://github.com/microsoft/ethr), use SAP's HCMT tool for network measurements between HANA VMs for scale-out or system replication.
+- Make sure that none of the restrictions mentioned in [Considerations for Azure Virtual Machines DBMS deployment for SAP workloads](./dbms_guide_general.md#azure-network-considerations) and [SAP HANA infrastructure configurations and operations on Azure](./hana-vm-operations.md) apply to your deployment.
+- Make sure your VMs are deployed to the correct availability zones. If using availability sets and seeing higher than expected latency between VMs, consult the article [Azure proximity placement groups for SAP applications](./sap-proximity-placement-scenarios.md) for correct usage.
+- Perform all the other checks listed for the proof-of-concept phase before applying the workload.
+- As the workload applies, record the resource consumption of the systems in Azure. Compare this consumption with records from your old platform. Adjust VM sizing of future deployments if you see large differences. Keep in mind that when you downsize, the storage and network bandwidth of VMs will be reduced as well.
+ - [Sizes for Azure virtual machines](../../sizes.md)
+- Experiment with system copy functionality and processes. The goal is to make it easy for you to copy a development system or a test system, so project teams can get new systems quickly.
+- Optimize and hone your team's Azure role-based access, permissions, and processes to make sure you have separation of duties. At the same time, make sure all teams can perform their tasks in the Azure infrastructure.
+- Exercise, test, and document high-availability and disaster recovery procedures to enable your staff to execute these tasks. Identify shortcomings and adapt new Azure functionality that you're integrating into your deployments.
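+The following is a minimal PowerShell sketch, assuming the Az.Compute module is installed and you're signed in with `Connect-AzAccount`, of one way to enumerate the VM sizes available in a target region and their availability-zone support. The region name is only an example.
+
+```powershell
+# List VM SKUs offered in a region, with the availability zones they can use.
+# Restricted SKUs (for example, not available to this subscription) are filtered out.
+$region = 'westeurope'   # example region - replace with your target region
+
+Get-AzComputeResourceSku |
+    Where-Object { $_.ResourceType -eq 'virtualMachines' -and
+                   $_.Locations -contains $region -and
+                   -not $_.Restrictions } |
+    Select-Object Name, @{ Name = 'Zones'; Expression = { $_.LocationInfo.Zones -join ',' } } |
+    Sort-Object Name |
+    Format-Table -AutoSize
+```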
+
+## [Production phase](#tab/production)
+
+### Production preparation phase
+
+In this phase, collect what you experienced and learned during your non-production deployments and apply it to future production deployments.
+
+- Complete any necessary SAP release upgrades of your production systems before moving to Azure.
+- Agree with the business owners on functional and business tests that need to be conducted after migration of the production system.
+- Make sure these tests are completed with the source systems in the current hosting location. Avoid conducting tests for the first time after the system is moved to Azure.
+- Test the process of migrating production systems to Azure. If you're not moving all production systems to Azure during the same time frame, build groups of production systems that need to be at the same hosting location. Test data migration, including connected non-SAP applications and interfaces.
+Here are some common methods:
+ - Use DBMS methods like backup/restore in combination with SQL Server Always On, HANA System Replication, or log shipping to seed and synchronize database content in Azure.
+ - Use backup/restore for smaller databases.
+ - Use the [SAP DMO](https://support.sap.com/en/tools/software-logistics-tools/software-update-manager/database-migration-option-dmo.html) process, for supported scenarios, either to move the database or to combine your migration with an SAP release upgrade and/or a move to HANA. Keep in mind that not all combinations of source DBMS and target DBMS are supported. You can find more information in the specific SAP support notes for the different releases of DMO. For example, [Database Migration Option (DMO) of SUM 2.0 SP15](https://launchpad.support.sap.com/#/notes/3206747).
+ - Test whether data transfer throughput is better through the internet or through ExpressRoute, in case you need to move backups or SAP export files. If you're moving data through the internet, you might need to change some of the network security group/application security group rules that you'll need to have in place for future production systems. A throughput-timing sketch follows this list.
+- Before moving systems from your old platform to Azure, collect resource consumption data. Useful data includes CPU usage, storage throughput, and IOPS data. Especially collect this data from the DBMS layer units, but also collect it from the application layer units. Also measure network and storage latency.
+- Recheck SAP notes and the required OS settings, the SAP HANA hardware directory, and the SAP PAM. Make sure there were no changes in supported VMs for Azure, supported OS releases in those VMs, and supported SAP and DBMS releases.
+- Update your deployment automation to consider the latest decisions you've made on VM types and Azure functionality.
+- Create a playbook for reacting to planned Azure maintenance events. Determine the order in which systems and VMs should be rebooted for planned maintenance.
+- Exercise, test, and document high-availability and disaster recovery procedures to enable your staff to execute these tasks during migration and immediately after go-live decision.
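+As a minimal sketch of such a throughput test, assuming [AzCopy](/azure/storage/common/storage-use-azcopy-v10) is installed and you have a SAS URL for a target blob container, you can time an upload of export or backup files from PowerShell. The local path, storage account, and container shown are placeholders; run the same copy over the internet path and over ExpressRoute, then compare the elapsed time.
+
+```powershell
+# Time an upload of SAP export/backup files to a blob container.
+$source      = 'D:\SAPExport'                                                            # placeholder local path
+$destination = 'https://<storageaccount>.blob.core.windows.net/<container>?<SAS-token>'  # placeholder target
+
+Measure-Command {
+    azcopy copy $source $destination --recursive
+} | Select-Object TotalMinutes
+```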
+
+### Go-live phase
+ During the go-live phase, be sure to follow the playbooks you developed during earlier phases. Execute the steps that you tested and practiced. Don't accept last-minute changes in configurations and processes. Also complete these steps:
-1. Verify that Azure portal monitoring and other monitoring tools are working. We recommend Windows Performance Monitor (perfmon) for Windows and SAR for Linux.
- - CPU counters.
- - Average CPU time, total (all CPUs)
- - Average CPU time, each individual processor (128 processors on M128 VMs)
- - CPU kernel time, each individual processor
- - CPU user time, each individual processor
- - Memory.
- - Free memory
- - Memory page in/second
- - Memory page out/second
- - Disk.
- - Disk read in KBps, per individual disk
- - Disk reads/second, per individual disk
- - Disk read in microseconds/read, per individual disk
- - Disk write in KBps, per individual disk
- - Disk write/second, per individual disk
- - Disk write in microseconds/read, per individual disk
- - Network.
- - Network packets in/second
- - Network packets out/second
- - Network KB in/second
- - Network KB out/second
-1. After data migration, perform all the validation tests you agreed upon with the business owners. Accept validation test results only when you have results for the original source systems.
-1. Check whether interfaces are functioning and whether other applications can communicate with the newly deployed production systems.
-1. Check the transport and correction system through SAP transaction STMS.
-1. Perform database backups after the system is released for production.
-1. Perform VM backups for the SAP application layer VMs after the system is released for production.
-1. For SAP systems that weren't part of the current go-live phase but that communicate with the SAP systems that you moved to Azure during this go-live phase, you need to reset the host name buffer in SM51. Doing so will remove the old cached IP addresses associated with the names of the application instances you moved to Azure.
--
-## Post production
+- Verify that Azure portal monitoring and other monitoring tools are working. Use Azure tools such as [Azure Monitor](/azure/azure-monitor/overview) for infrastructure monitoring and [Azure Monitor for SAP](/azure/virtual-machines/workloads/sap/monitor-sap-on-azure) for a combination of OS and application KPIs, allowing you to integrate everything into one dashboard for visibility during and after go-live.
+For operating system key performance indicators (a small collection sketch follows this list):
+ - [SAP note 1286256 - How-to: Using Windows LogMan tool to collect performance data on Windows Platforms](https://launchpad.support.sap.com/#/notes/1286256)
+ - On Linux, ensure the sysstat tool is installed and capturing details at regular intervals
+- After data migration, perform all the validation tests you agreed upon with the business owners. Accept validation test results only when you have results for the original source systems.
+- Check whether interfaces are functioning and whether other applications can communicate with the newly deployed production systems.
+- Check the transport and correction system through SAP transaction STMS.
+- Perform database backups after the system is released for production.
+- Perform VM backups for the SAP application layer VMs after the system is released for production.
+- For SAP systems that weren't part of the current go-live phase but that communicate with the SAP systems that you moved to Azure during this go-live phase, you need to reset the host name buffer in SM51. Doing so will remove the old cached IP addresses associated with the names of the application instances you moved to Azure.
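+As a minimal sketch of such an operating system spot check on a Windows VM, assuming English counter names and an existing `C:\Temp` folder, the following PowerShell samples a few classic counters during go-live; the interval and sample count are arbitrary examples.
+
+```powershell
+# Sample key CPU, memory, disk, and network counters every 15 seconds, 20 times.
+$counters = @(
+    '\Processor(_Total)\% Processor Time',
+    '\Memory\Available MBytes',
+    '\Memory\Pages/sec',
+    '\PhysicalDisk(_Total)\Disk Reads/sec',
+    '\PhysicalDisk(_Total)\Disk Writes/sec',
+    '\Network Interface(*)\Bytes Total/sec'
+)
+
+Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 20 |
+    ForEach-Object { $_.CounterSamples } |
+    Select-Object TimeStamp, Path, CookedValue |
+    Export-Csv -Path 'C:\Temp\golive-baseline.csv' -NoTypeInformation
+```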
+
+### Post production
+ This phase is about monitoring, operating, and administering the system. From an SAP point of view, the usual tasks that you were required to complete in your old hosting location apply. Complete these Azure-specific tasks as well:
+- Review Azure invoices for high-charging systems. Establish a FinOps culture and build an Azure cost optimization capability in your organization.
+- Optimize price/performance efficiency on the VM side and the storage side.
+- Once your SAP on Azure deployment has stabilized, shift your focus to a culture of continuous sizing and capacity reviews. Unlike on-premises, where you size for a long period, right-sizing is a key benefit of running SAP workloads on Azure, and diligent capacity planning will be key.
+- Optimize the times when you can shut down systems. A minimal shutdown sketch follows this list.
+- Once your solution has stabilized in Azure, consider moving away from a Pay-As-You-Go commercial model and leveraging Azure Reserved Instances.
+- Plan and execute regular disaster recovery drills.
+- Define and implement your strategy around 'evergreening' to align your own roadmap with Microsoft's SAP on Azure roadmap, so you benefit from the advancement of technology.
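+As a minimal sketch of such a scheduled shutdown, assuming the Az.Compute module and a hypothetical resource group and `Environment` tag, the following PowerShell deallocates sandbox VMs so they stop accruing compute charges; a matching `Start-AzVM` loop would bring them back. In practice you'd run this from Azure Automation or a scheduled pipeline rather than interactively.
+
+```powershell
+# Deallocate tagged non-production SAP VMs outside business hours.
+$resourceGroup = 'SAP-SBX-RG'   # placeholder resource group
+
+Get-AzVM -ResourceGroupName $resourceGroup |
+    Where-Object { $_.Tags['Environment'] -eq 'Sandbox' } |   # hypothetical tag
+    ForEach-Object {
+        # -Force suppresses the confirmation prompt; -NoWait returns immediately.
+        Stop-AzVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name -Force -NoWait
+    }
+```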
+
+## [Checklist](#tab/checklist)
+
+### SAP on Azure Infrastructure Checklist
+
+After deploying infrastructure and applications and before each migration starts, validate that:
+
+1. The correct VM types were deployed, with the correct attributes and storage configuration.
+2. The VMs are on up-to-date OS, DBMS, and SAP kernel releases and patches, and the OS, DBMS, and SAP kernel versions are uniform throughout the landscape.
+3. VMs are secured and hardened as required and in a uniform way across the respective environment.
+4. VMs were deployed into Azure availability zones or availability sets as planned.
+5. Generation 2 VMs have been deployed. Generation 1 VMs should not be used for new deployments.
+6. Azure Premium Storage or Premium Storage v2 is used for latency-sensitive disks or where the [single-VM SLA of 99.9%](https://azure.microsoft.com/support/legal/sla/virtual-machines/v1_8/) is required.
+7. Make sure that, within the VMs, storage spaces or [stripe sets were built correctly](./planning-guide-storage.md#striping-or-not-striping) across file systems that require more than one disk, such as DBMS data and log areas.
+8. The correct stripe size and file system block size are used, if noted in the respective DBMS guides.
+9. Azure VM storage and caching are used appropriately.
+ - Make sure that only disks holding DBMS online logs use Write Accelerator, with host caching set to None.
+ - Other disks with premium storage use the cache setting None or ReadOnly, depending on their use.
+ - Check the [configuration of LVM on Linux VMs in Azure](/azure/virtual-machines/linux/configure-lvm).
+10. [Azure managed disks](https://azure.microsoft.com/services/managed-disks/) or [Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-solution-architectures#sap-on-azure-solutions) NFS volumes are used exclusively for DBMS VMs.
+11. For Azure NetApp Files, [correct mount options are used](/azure/azure-netapp-files/performance-linux-mount-options) and volumes are sized appropriately on correct storage tier.
+12. Azure services (Azure Files or Azure NetApp Files) are used for any SMB or NFS volumes or shares. NFS volumes or SMB shares are reachable by the respective SAP environment or individual SAP system(s). Network routing to the NFS/SMB server goes through private network address space, using a private endpoint if needed.
+13. [Azure accelerated networking](/azure/virtual-network/accelerated-networking-overview) is enabled on every network interface for all SAP VMs. A verification sketch follows this checklist.
+14. No [network virtual appliances](https://azure.microsoft.com/solutions/network-appliances/) are in the communication path between the SAP application and the DBMS layer of SAP systems based on SAP NetWeaver or ABAP Platform.
+15. All load balancers for highly available SAP components use a standard load balancer with floating IP and HA ports enabled.
+16. SAP application and DBMS VM(s) are placed in the same or different subnets of one virtual network, or in directly peered virtual networks.
+17. Application and network security group rules allow communication as desired and planned, and block communication where required.
+18. Timeout settings are set correctly, as described earlier.
+19. If using proximity placement groups, check whether the availability sets and their VMs are deployed to the [correct PPG](./sap-proximity-placement-scenarios.md).
+20. Network latency between SAP application layer VMs and DBMS VMs is tested and validated as described in SAP notes [500235](https://launchpad.support.sap.com/#/notes/500235) and [1100926](https://launchpad.support.sap.com/#/notes/1100926). Evaluate the results against the network latency guidance in [SAP note 1100926](https://launchpad.support.sap.com/#/notes/1100926). The network latency should be in the moderate or good range.
+21. Encryption was implemented where necessary and with the appropriate encryption method.
+22. Your own encryption keys are protected against loss, destruction, or malicious use.
+23. Interfaces and other applications can connect to the newly deployed infrastructure.
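+The following PowerShell sketch, assuming the Az.Compute and Az.Network modules and a placeholder resource group name, illustrates spot checks for two of the items above: accelerated networking on the network interfaces, and the caching and Write Accelerator settings of each data disk.
+
+```powershell
+$resourceGroup = 'SAP-PRD-RG'   # placeholder resource group
+
+# Checklist item 13: accelerated networking enabled on every NIC.
+Get-AzNetworkInterface -ResourceGroupName $resourceGroup |
+    Select-Object Name, EnableAcceleratedNetworking
+
+# Checklist item 9: data disk caching and Write Accelerator settings per VM.
+Get-AzVM -ResourceGroupName $resourceGroup | ForEach-Object {
+    $vmName = $_.Name
+    $_.StorageProfile.DataDisks |
+        Select-Object @{ Name = 'VM'; Expression = { $vmName } }, Name, Lun, Caching, WriteAcceleratorEnabled
+}
+```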
+++
+## Automated checks and insights in SAP landscape
+
+Several of the checks above can be executed automatically with the open-source [SAP on Azure Quality Check Tool](https://github.com/Azure/SAP-on-Azure-Scripts-and-Utilities/tree/main/QualityCheck). While the tool doesn't automatically remediate the issues it finds, it warns about configurations that go against Microsoft recommendations.
-1. Review Azure invoices for high-charging systems.
-2. Optimize price/performance efficiency on the VM side and the storage side.
-3. Optimize the times when you can shut down systems.
+> [!TIP]
+> The same [quality checks and additional insights](/azure/center-sap-solutions/get-quality-checks-insights) are executed regularly when SAP systems are deployed with or registered in [Azure Center for SAP solutions](/azure/center-sap-solutions/), and are part of that service.
+
+Further tools that help you check deployments, document findings, plan remediation steps, and generally optimize your SAP on Azure landscape are:
+- [Azure Well-Architected Framework review](/assessments/?id=azure-architecture-review&mode=pre-assessment): An assessment of your workload focusing on the five main pillars of reliability, security, cost optimization, operational excellence, and performance efficiency. It supports SAP workloads, and we recommend running a review at the start of and after every project phase.
+- [Azure Inventory Checks for SAP](https://github.com/Azure/SAP-on-Azure-Scripts-and-Utilities/tree/main/Tools%26Framework/InventoryChecksForSAP): An open-source Azure Monitor workbook that shows your Azure inventory with intelligence to highlight configuration drift and improve quality.
## Next steps See these articles: -- [Azure Virtual Machines planning and implementation for SAP NetWeaver](./planning-guide.md)-- [Azure Virtual Machines deployment for SAP NetWeaver](./deployment-guide.md)-- [Considerations for Azure Virtual Machines DBMS deployment for SAP workloads](./dbms_guide_general.md)
+> [!div class="checklist"]
+> * [Azure planning and implementation for SAP NetWeaver](./planning-guide.md)
+> * [Considerations for Azure Virtual Machines DBMS deployment for SAP workloads](./dbms_guide_general.md)
+> * [Azure Virtual Machines deployment for SAP NetWeaver](./deployment-guide.md)
virtual-network Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/custom-ip-address-prefix.md
When ready, you can issue the command to have your range advertised from Azure a
* A custom IP prefix must be associated with a single Azure region.
-* An IPv4 range can be equal or between /21 to /24. An IPv6 range can be equal or between /46 to /48.
+* An IPv4 range can be anywhere from /21 to /24 (inclusive). An IPv6 range must be /48.
* Custom IP prefixes do not currently support derivation of IPs with Internet Routing Preference or that use Global Tier (for cross-region load-balancing).
-* In regions with [availability zones](../../availability-zones/az-overview.md), a custom IPv4 prefix (or a regional custom IPv6 prefix) must be specified as either zone-redundant or assigned to a specific zone. It can't be created with no zone specified in these regions. All IPs from the prefix must have the same zonal properties.
+* In regions with [availability zones](../../availability-zones/az-overview.md), a custom IPv4 prefix (or a regional custom prefix) must be specified as either zone-redundant or assigned to a specific zone. It can't be created with no zone specified in these regions. All IPs from the prefix must have the same zonal properties. A creation sketch follows the important note below.
* The advertisements of IPs from a custom IP prefix over Azure ExpressRoute aren't currently supported.
When ready, you can issue the command to have your range advertised from Azure a
* IPs brought to Azure may have a delay up to 2 weeks before they can be used for Windows Server Activation.
+> [!IMPORTANT]
+> There are several differences between how custom IPv4 and IPv6 prefixes are onboarded and utilized. Please see [Differences between using BYOIPv4 and BYOIPv6](create-custom-ip-address-prefix-ipv6-powershell.md#differences-between-using-byoipv4-and-byoipv6) for more details.
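+As a minimal PowerShell sketch, assuming the Az.Network module, placeholder names, and an example /24 range, the following shows how the zonal property is specified when a custom IPv4 prefix is created. A real onboarding also requires the ownership-validation parameters (authorization and signed messages) covered in the creation articles linked below.
+
+```powershell
+# Create a zone-redundant custom IPv4 prefix in a region with availability zones.
+# Placeholder names and range - replace with your own values and include the
+# ownership-validation parameters required for your address range.
+New-AzCustomIpPrefix -Name 'myCustomIpPrefix' `
+    -ResourceGroupName 'myResourceGroup' `
+    -Location 'westus2' `
+    -Cidr '1.2.3.0/24' `
+    -Zone 1,2,3
+```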
+ ## Pricing * There is no charge to provision or use custom IP prefixes. There is no charge for any public IP prefixes and public IP addresses that are derived from custom IP prefixes.
When ready, you can issue the command to have your range advertised from Azure a
## Next steps -- To create a custom IP address prefix using the Azure portal, see [Create custom IPv4 address prefix using the Azure portal](create-custom-ip-address-prefix-portal.md).
+- To create a custom IPv4 address prefix using the Azure portal, see [Create custom IPv4 address prefix using the Azure portal](create-custom-ip-address-prefix-portal.md).
-- To create a custom IP address prefix using PowerShell, see [Create a custom IPv4 address prefix using Azure PowerShell](create-custom-ip-address-prefix-powershell.md).
+- To create a custom IPv4 address prefix using PowerShell, see [Create a custom IPv4 address prefix using Azure PowerShell](create-custom-ip-address-prefix-powershell.md).
- For more information about the management of a custom IP address prefix, see [Manage a custom IP address prefix](create-custom-ip-address-prefix-powershell.md).
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **GatewayManager** | Management traffic for deployments dedicated to Azure VPN Gateway and Application Gateway. | Inbound | No | No | | **GuestAndHybridManagement** | Azure Automation and Guest Configuration. | Outbound | No | Yes | | **HDInsight** | Azure HDInsight. | Inbound | Yes | No |
-| **Internet** | The IP address space that's outside the virtual network and reachable by the public internet.<br/><br/>This service tag only applies where the traffic does not hit any other service tag.<br/><br/>The address range includes the [Azure-owned public IP address space](https://www.microsoft.com/download/details.aspx?id=56519). | Both | No | No |
+| **Internet** | The IP address space that's outside the virtual network and reachable by the public internet.<br/><br/>The address range includes the [Azure-owned public IP address space](https://www.microsoft.com/download/details.aspx?id=56519). | Both | No | No |
| **LogicApps** | Logic Apps. | Both | No | No | | **LogicAppsManagement** | Management traffic for Logic Apps. | Inbound | No | No | | **M365ManagementActivityApi** | The Office 365 Management Activity API provides information about various user, admin, system, and policy actions and events from Office 365 and Azure Active Directory activity logs. Customers and partners can use this information to create new or enhance existing operations, security, and compliance-monitoring solutions for the enterprise.<br/><br/>**Note**: This tag has a dependency on the **AzureActiveDirectory** tag. | Outbound | Yes | No |
vpn-gateway Azure Vpn Client Optional Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/azure-vpn-client-optional-configurations.md
+
+ Title: 'Configure Azure VPN Client optional settings'
+
+description: Learn how to configure optional configuration settings for the Azure VPN Client. Settings include DNS suffixes, custom DNS servers, custom routes, and VPN client forced tunneling.
+++ Last updated : 11/22/2022+++
+# Azure VPN Client - configure optional DNS and routing settings
+
+This article helps you configure optional settings for the Azure VPN Client for VPN Gateway P2S connections. You can configure DNS suffixes, custom DNS servers, custom routes, and VPN client-side forced tunneling.
+
+> [!NOTE]
+> The Azure VPN Client is only supported for OpenVPN® protocol connections.
+>
+
+## Before you begin
+
+If you haven't already done so, make sure you complete the following items:
+
+* Generate and download the VPN client profile configuration files for your P2S deployment. Use the following steps:
+
+ 1. In the Azure portal, go to the virtual network gateway.
+ 1. Click **Point-to-Site configuration**.
+ 1. Click **Download VPN client**.
+ 1. Select the client and fill out any information that is requested.
+ 1. Click **Download** to generate the .zip file.
+ 1. The .zip file will download, typically to your Downloads folder.
+
+* Download and install the Azure VPN Client. For steps, see one of the following articles:
+
+ * [Certificate authentication](point-to-site-vpn-client-cert-windows.md#download-the-azure-vpn-client)
+ * [Azure AD authentication](openvpn-azure-ad-client.md#download)
+
+## Working with VPN client profile configuration files
+
+The steps in this article require you to modify and import the Azure VPN Client profile configuration file. To work with VPN client profile configuration files (xml files), do the following:
+
+1. Locate the profile configuration file and open it using the editor of your choice.
+1. Using the examples in the sections below, modify the file as necessary, then save your changes.
+1. Import the file to configure the Azure VPN client. You can import the file for the Azure VPN Client using these methods:
+
+ * **Azure VPN Client interface**: Open the Azure VPN Client and click **+** and then **Import**. Locate the modified xml file, configure any additional settings in the Azure VPN Client interface (if necessary), then click **Save**.
+
+ * **Command-line prompt**: Place the downloaded *azurevpnconfig.xml* file in the *%userprofile%\AppData\Local\Packages\Microsoft.AzureVpn_8wekyb3d8bbwe\LocalState* folder, then run the following command: `azurevpn -i azurevpnconfig.xml`. To force the import, use the **-f** switch.
+
+## DNS
+
+### Add DNS suffixes
+
+To add DNS suffixes, modify the downloaded profile XML file and add the **\<dnssuffixes>\<dnssuffix> \</dnssuffix>\</dnssuffixes>** tags.
+
+```xml
+<azvpnprofile>
+<clientconfig>
+
+ <dnssuffixes>
+ <dnssuffix>.mycorp.com</dnssuffix>
+ <dnssuffix>.xyz.com</dnssuffix>
+ <dnssuffix>.etc.net</dnssuffix>
+ </dnssuffixes>
+
+</clientconfig>
+</azvpnprofile>
+```
+
+### Add custom DNS servers
+
+To add custom DNS servers, modify the downloaded profile XML file and add the **\<dnsservers>\<dnsserver> \</dnsserver>\</dnsservers>** tags.
+
+```xml
+<azvpnprofile>
+<clientconfig>
+
+ <dnsservers>
+ <dnsserver>x.x.x.x</dnsserver>
+ <dnsserver>y.y.y.y</dnsserver>
+ </dnsservers>
+
+</clientconfig>
+</azvpnprofile>
+```
+
+> [!NOTE]
+> The OpenVPN Azure AD client utilizes DNS Name Resolution Policy Table (NRPT) entries, which means DNS servers will not be listed under the output of `ipconfig /all`. To confirm your in-use DNS settings, consult [Get-DnsClientNrptPolicy](/powershell/module/dnsclient/get-dnsclientnrptpolicy) in PowerShell, as shown in the example below.
+>
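+A minimal check, assuming the default property names returned by `Get-DnsClientNrptPolicy`, looks like this:
+
+```powershell
+# Show which namespaces are resolved by which DNS servers while the VPN is connected.
+Get-DnsClientNrptPolicy | Select-Object Namespace, NameServers
+```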
+
+## Routing
+
+### Split tunneling
+
+Split tunneling is configured by default for the VPN client.
+
+### Forced tunneling
+
+You can configure forced tunneling to direct all traffic to the VPN tunnel. Forced tunneling can be configured by using one of two methods: by advertising custom routes, or by modifying the profile XML file. You can include 0/0 if you're using Azure VPN Client version 2.1900:39.0 or higher.
+
+> [!NOTE]
+> Internet connectivity is not provided through the VPN gateway. As a result, all traffic bound for the Internet is dropped.
+>
+
+* **Advertise custom routes:** You can advertise custom routes `0.0.0.0/1` and `128.0.0.0/1`. For more information, see [Advertise custom routes for P2S VPN clients](vpn-gateway-p2s-advertise-custom-routes.md).
+
+* **Profile XML:** You can modify the downloaded profile xml file and add the **\<includeroutes>\<route>\<destination>\<mask> \</destination>\</mask>\</route>\</includeroutes>** tags. Make sure to update the version number to **2**.
+
+ ```xml
+ <azvpnprofile>
+ <clientconfig>
+
+ <includeroutes>
+ <route>
+ <destination>0.0.0.0</destination><mask>1</mask>
+ </route>
+ <route>
+ <destination>128.0.0.0</destination><mask>1</mask>
+ </route>
+ </includeroutes>
+
+ </clientconfig>
+ </azvpnprofile>
+ ```
+
+### Add custom routes
+
+You can add custom routes. Modify the downloaded profile XML file and add the **\<includeroutes>\<route>\<destination>\<mask> \</destination>\</mask>\</route>\</includeroutes>** tags.
+
+```xml
+<azvpnprofile>
+<clientconfig>
+
+ <includeroutes>
+ <route>
+ <destination>x.x.x.x</destination><mask>24</mask>
+ </route>
+ </includeroutes>
+
+</clientconfig>
+</azvpnprofile>
+```
+
+### Block (exclude) routes
+
+You can block (exclude) routes. Modify the downloaded profile XML file and add the **\<excluderoutes>\<route>\<destination>\<mask> \</destination>\</mask>\</route>\</excluderoutes>** tags.
+
+```xml
+<azvpnprofile>
+<clientconfig>
+
+ <excluderoutes>
+ <route>
+ <destination>x.x.x.x</destination><mask>24</mask>
+ </route>
+ </excluderoutes>
+
+</clientconfig>
+</azvpnprofile>
+```
+
+> [!NOTE]
+> - The default state of the clientconfig tag is `<clientconfig i:nil="true" />`, which can be modified based on your requirements.
+> - A duplicate clientconfig tag isn't supported on macOS, so make sure the clientconfig tag isn't duplicated in the XML file.
+>
+
+## Next steps
+
+For more information about P2S VPN, see the following articles:
+
+* [About point-to-site VPN](point-to-site-about.md)
+* [About point-to-site VPN routing](vpn-gateway-about-point-to-site-routing.md)
+
vpn-gateway Openvpn Azure Ad Client Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-client-mac.md
Previously updated : 09/30/2021 Last updated : 11/22/2022
-# Configure an Azure VPN Client - Azure AD authentication - macOS
+# Configure the Azure VPN Client - Azure AD authentication - macOS
This article helps you configure a VPN client for a computer running macOS 10.15 and later to connect to a virtual network using Point-to-Site VPN and Azure Active Directory authentication. Before you can connect and authenticate using Azure AD, you must first configure your Azure AD tenant. For more information, see [Configure an Azure AD tenant](openvpn-azure-ad-tenant.md). For more information about Point-to-Site connections, see [About Point-to-Site connections](point-to-site-about.md).
This article helps you configure a VPN client for a computer running macOS 10.15
> For every computer that you want to connect to a VNet using a Point-to-Site VPN connection, you need to do the following:
-
+ * Download the Azure VPN Client to the computer. * Configure a client profile that contains the VPN settings.
If you want to configure multiple computers, you can create a client profile on
Before you can connect and authenticate using Azure AD, you must first configure your Azure AD tenant. For more information, see [Configure an Azure AD tenant](openvpn-azure-ad-tenant.md).
-## <a name="download"></a>To download the Azure VPN client
+## Download the Azure VPN Client
1. Download the [Azure VPN Client](https://apps.apple.com/us/app/azure-vpn-client/id1553936137) from the Apple Store. 1. Install the client on your computer.
-## <a name="import"></a>To import a connection profile
+## Generate VPN client profile configuration files
+
+1. To generate the VPN client profile configuration package, see [Working with P2S VPN client profile files](about-vpn-profile-download.md).
+1. Download and extract the VPN client profile configuration files.
+
+## Import VPN client profile configuration files
-1. Download and extract the profile files. For steps, see [Working with VPN client profile files](about-vpn-profile-download.md).
1. On the Azure VPN Client page, select **Import**. :::image type="content" source="media/openvpn-azure-ad-client-mac/import-1.png" alt-text="Screenshot of Azure VPN Client import selection.":::
Before you can connect and authenticate using Azure AD, you must first configure
:::image type="content" source="media/openvpn-azure-ad-client-mac/import-5.png" alt-text="Screenshot of Azure VPN Client connected status and disconnect button.":::
-## <a name="manual"></a>To create a connection manually
+## To create a connection manually
1. Open the Azure VPN Client. Select **Add** to create a new connection.
Before you can connect and authenticate using Azure AD, you must first configure
:::image type="content" source="media/openvpn-azure-ad-client-mac/add-5.png" alt-text="Screenshot of Azure VPN Client connected and disconnect button.":::
-## <a name="remove"></a>To remove a connection profile
+## To remove a VPN connection profile
-You can remove the VPN connection profile from your computer.
+You can remove the VPN connection profile from your computer.
1. Navigate to the Azure VPN Client. 1. Select the VPN connection that you want to remove, click the dropdown, and select **Remove**.
You can remove the VPN connection profile from your computer.
1. On the **Remove VPN connection?** box, click **Remove**. :::image type="content" source="media/openvpn-azure-ad-client-mac/remove-2.png" alt-text="Screenshot of removing.":::
-## FAQ
-
-### How do I add DNS suffixes to the VPN client?
-
-You can modify the downloaded profile XML file and add the **\<dnssuffixes>\<dnssufix> \</dnssufix>\</dnssuffixes>** tags.
-
-```
-<azvpnprofile>
-<clientconfig>
-
- <dnssuffixes>
- <dnssuffix>.mycorp.com</dnssuffix>
- <dnssuffix>.xyz.com</dnssuffix>
- <dnssuffix>.etc.net</dnssuffix>
- </dnssuffixes>
-
-</clientconfig>
-</azvpnprofile>
-```
-
-### How do I add custom DNS servers to the VPN client?
-
-You can modify the downloaded profile XML file and add the **\<dnsservers>\<dnsserver> \</dnsserver>\</dnsservers>** tags.
-
-```
-<azvpnprofile>
-<clientconfig>
-
- <dnsservers>
- <dnsserver>x.x.x.x</dnsserver>
- <dnsserver>y.y.y.y</dnsserver>
- </dnsservers>
-
-</clientconfig>
-</azvpnprofile>
-```
+## Optional Azure VPN Client configuration settings
-### <a name="split"></a>Can I configure split tunneling for the VPN client?
-
-Split tunneling is configured by default for the VPN client.
-
-### <a name="forced-tunnel"></a>How do I direct all traffic to the VPN tunnel (forced tunneling)?
-
-You can configure forced tunneling using two different methods; either by advertising custom routes, or by modifying the profile XML file.
-
-> [!NOTE]
-> Internet connectivity is not provided through the VPN gateway. As a result, all traffic bound for the Internet is dropped.
->
-
-* **Advertise custom routes:** You can advertise custom routes 0.0.0.0/1 and 128.0.0.0/1. For more information, see [Advertise custom routes for P2S VPN clients](vpn-gateway-p2s-advertise-custom-routes.md).
-
-* **Profile XML:** You can modify the downloaded profile XML file to add the **\<includeroutes>\<route>\<destination>\<mask> \</destination>\</mask>\</route>\</includeroutes>** tags.
--
- ```
- <azvpnprofile>
- <clientconfig>
-
- <includeroutes>
- <route>
- <destination>0.0.0.0</destination><mask>1</mask>
- </route>
- <route>
- <destination>128.0.0.0</destination><mask>1</mask>
- </route>
- </includeroutes>
-
- </clientconfig>
- </azvpnprofile>
- ```
--
-### How do I add custom routes to the VPN client?
-
-You can modify the downloaded profile XML file and add the **\<includeroutes>\<route>\<destination>\<mask> \</destination>\</mask>\</route>\</includeroutes>** tags.
-
-```
-<azvpnprofile>
-<clientconfig>
-
- <includeroutes>
- <route>
- <destination>x.x.x.x</destination><mask>24</mask>
- </route>
- </includeroutes>
-
-</clientconfig>
-</azvpnprofile>
-```
-
-### How do I block (exclude) routes from the VPN client?
-
-You can modify the downloaded profile XML file and add the **\<excluderoutes>\<route>\<destination>\<mask> \</destination>\</mask>\</route>\</excluderoutes>** tags.
-
-```
-<azvpnprofile>
-<clientconfig>
-
- <excluderoutes>
- <route>
- <destination>x.x.x.x</destination><mask>24</mask>
- </route>
- </excluderoutes>
-
-</clientconfig>
-</azvpnprofile>
-```
-
-> [!NOTE]
-> - The default status for clientconfig tag is <clientconfig i:nil="true" />, which can be modified based on the requirement.
-> - Duplicate clientconfig tag is not supported on macOS, so make sure the clientconfig tag is not duplicated in the XML file.
->
+You can configure the Azure VPN Client with optional configuration settings such as additional DNS servers, custom DNS, forced tunneling, custom routes, and other additional settings. For a description of the available optional settings and configuration steps, see [Azure VPN Client optional settings](azure-vpn-client-optional-configurations.md).
## Next steps
vpn-gateway Openvpn Azure Ad Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-client.md
Previously updated : 05/05/2022 Last updated : 11/22/2022
-# Configure an Azure VPN Client - Azure AD authentication - Windows
+# Configure the Azure VPN Client - Azure AD authentication - Windows
-This article helps you configure the Azure VPN Client on a Windows computer to connect to a virtual network using a VPN Gateway point-to-site VPN and Azure Active Directory authentication. Before you can connect and authenticate using Azure AD, you must first configure your Azure AD tenant. For more information, see [Configure an Azure AD tenant](openvpn-azure-ad-tenant.md). For more information about point-to-site, see [About point-to-site VPN](point-to-site-about.md).
+This article helps you configure the Azure VPN Client on a Windows computer to connect to a virtual network using a VPN Gateway point-to-site (P2S) VPN and Azure Active Directory authentication. Before you can connect and authenticate using Azure AD, you must first configure your Azure AD tenant. For more information, see [Configure an Azure AD tenant](openvpn-azure-ad-tenant.md). For more information about point-to-site, see [About point-to-site VPN](point-to-site-about.md). The Azure VPN Client is supported with Windows FIPS mode by using the [KB4577063](https://support.microsoft.com/help/4577063/windows-10-update-kb4577063) hotfix.
[!INCLUDE [OpenVPN note](../../includes/vpn-gateway-openvpn-auth-include.md)] ## <a name="workflow"></a>Workflow
-After your Azure VPN Gateway point-to-site configuration is complete, your next steps are as follows:
+After your Azure VPN Gateway P2S configuration is complete, your next steps are as follows:
1. Download and install the Azure VPN Client. 1. Generate the VPN client profile configuration package.
After your Azure VPN Gateway point-to-site configuration is complete, your next
1. Create a connection. 1. Optional - export the profile settings from the client and import to other client computers. - ## <a name="download"></a>Download the Azure VPN Client [!INCLUDE [Download Azure VPN Client](../../includes/vpn-gateway-download-vpn-client.md)]
-## <a name="generate"></a>Generate the VPN client profile configuration package
+## <a name="generate"></a>Generate VPN client profile configuration files
-To generate the VPN client profile configuration package, see [Working with P2S VPN client profile files](about-vpn-profile-download.md). After you generate the package, follow the steps to extract the profile configuration files.
+1. To generate the VPN client profile configuration package, see [Working with P2S VPN client profile files](about-vpn-profile-download.md).
+1. Download and extract the VPN client profile configuration files.
-## <a name="import"></a>Import the profile file
+## <a name="import"></a>Import VPN client profile configuration files
For Azure AD authentication configurations, the **azurevpnconfig.xml** is used. The file is located in the **AzureVPN** folder of the VPN client profile configuration package.
Once you have a working profile and need to distribute it to other users, you ca
![diagnose](./media/openvpn-azure-ad-client/diagnose/diagnose4.jpg)
-## FAQ
-
-### Is the Azure VPN Client supported with Windows FIPS mode?
-
-Yes, with the [KB4577063](https://support.microsoft.com/help/4577063/windows-10-update-kb4577063) hotfix.
-
-### How do I add DNS suffixes to the VPN client?
-
-You can modify the downloaded profile XML file and add the **\<dnssuffixes>\<dnssufix> \</dnssufix>\</dnssuffixes>** tags.
-
-```
-<azvpnprofile>
-<clientconfig>
-
- <dnssuffixes>
- <dnssuffix>.mycorp.com</dnssuffix>
- <dnssuffix>.xyz.com</dnssuffix>
- <dnssuffix>.etc.net</dnssuffix>
- </dnssuffixes>
-
-</clientconfig>
-</azvpnprofile>
-```
-
-### How do I add custom DNS servers to the VPN client?
-
-You can modify the downloaded profile XML file and add the **\<dnsservers>\<dnsserver> \</dnsserver>\</dnsservers>** tags.
-
-```
-<azvpnprofile>
-<clientconfig>
-
- <dnsservers>
- <dnsserver>x.x.x.x</dnsserver>
- <dnsserver>y.y.y.y</dnsserver>
- </dnsservers>
-
-</clientconfig>
-</azvpnprofile>
-```
-
-> [!NOTE]
-> The OpenVPN Azure AD client utilizes DNS Name Resolution Policy Table (NRPT) entries, which means DNS servers will not be listed under the output of `ipconfig /all`. To confirm your in-use DNS settings, please consult [Get-DnsClientNrptPolicy](/powershell/module/dnsclient/get-dnsclientnrptpolicy) in PowerShell.
->
-
-### <a name="split"></a>Can I configure split tunneling for the VPN client?
-
-Split tunneling is configured by default for the VPN client.
-
-### <a name="forced-tunnel"></a>How do I direct all traffic to the VPN tunnel (forced tunneling)?
-
-You can configure forced tunneling using two different methods; either by advertising custom routes, or by modifying the profile XML file.
-
-> [!NOTE]
-> Internet connectivity is not provided through the VPN gateway. As a result, all traffic bound for the Internet is dropped.
->
-
-* **Advertise custom routes:** You can advertise custom routes 0.0.0.0/1 and 128.0.0.0/1. For more information, see [Advertise custom routes for P2S VPN clients](vpn-gateway-p2s-advertise-custom-routes.md).
-
-* **Profile XML:** You can modify the downloaded profile XML file to add the **\<includeroutes>\<route>\<destination>\<mask> \</destination>\</mask>\</route>\</includeroutes>** tags.
--
- ```
- <azvpnprofile>
- <clientconfig>
-
- <includeroutes>
- <route>
- <destination>0.0.0.0</destination><mask>1</mask>
- </route>
- <route>
- <destination>128.0.0.0</destination><mask>1</mask>
- </route>
- </includeroutes>
-
- </clientconfig>
- </azvpnprofile>
- ```
--
-### How do I add custom routes to the VPN client?
-
-You can modify the downloaded profile XML file and add the **\<includeroutes>\<route>\<destination>\<mask> \</destination>\</mask>\</route>\</includeroutes>** tags.
-
-```
-<azvpnprofile>
-<clientconfig>
-
- <includeroutes>
- <route>
- <destination>x.x.x.x</destination><mask>24</mask>
- </route>
- </includeroutes>
-
-</clientconfig>
-</azvpnprofile>
-```
-
-### How do I block (exclude) routes from the VPN client?
-
-You can modify the downloaded profile XML file and add the **\<excluderoutes>\<route>\<destination>\<mask> \</destination>\</mask>\</route>\</excluderoutes>** tags.
-
-```
-<azvpnprofile>
-<clientconfig>
-
- <excluderoutes>
- <route>
- <destination>x.x.x.x</destination><mask>24</mask>
- </route>
- </excluderoutes>
-
-</clientconfig>
-</azvpnprofile>
-```
-
-### Can I import the profile from a command-line prompt?
-
-You can import the profile from a command-line prompt by placing the downloaded **azurevpnconfig.xml** file in the **%userprofile%\AppData\Local\Packages\Microsoft.AzureVpn_8wekyb3d8bbwe\LocalState** folder and running the following command:
-
-```
-azurevpn -i azurevpnconfig.xml
-```
-To force the import, use the **-f** switch.
+## Optional Azure VPN Client configuration settings
+You can configure the Azure VPN Client with optional configuration settings such as additional DNS servers, custom DNS, forced tunneling, custom routes, and other additional settings. For a description of the available optional settings and configuration steps, see [Azure VPN Client optional settings](azure-vpn-client-optional-configurations.md).
## Next steps
vpn-gateway Point To Site Vpn Client Cert Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-windows.md
description: Learn how to configure VPN clients for P2S configurations that use
Previously updated : 10/12/2022 Last updated : 11/22/2022
When you open the zip file, you'll see the **AzureVPN** folder. Locate the **azu
:::image type="content" source="./media/point-to-site-vpn-client-cert-windows/configure-certificate.png" alt-text="Screenshot showing Azure VPN client profile configuration page." lightbox="./media/point-to-site-vpn-client-cert-windows/configure-certificate.png":::
- If you don't see a client certificate in the **Certificate Information** dropdown, you'll need cancel the profile configuration import and fix the issue before proceeding. It's possible that one of the following things is true:
+ If you don't see a client certificate in the **Certificate Information** dropdown, you'll need to cancel the profile configuration import and fix the issue before proceeding. It's possible that one of the following things is true:
* The client certificate isn't installed locally on the client computer. * There are multiple certificates with exactly the same name installed on your local computer (common in test environments).
When you open the zip file, you'll see the **AzureVPN** folder. Locate the **azu
1. In the left pane, locate the **VPN connection**, then click **Connect**.
-Azure VPN client provides high availability by allowing you to add a secondary VPN client profile, providing a more resilient way to access VPN. You can choose to add a secondary client profile using any of the already imported client profiles and that **enables the high availability** option for windows. In case of any **region outage** or failure to connect to the primary VPN client profile, Azure VPN provides the capability to auto-connect to the secondary client profile without causing any disruptions. This setting requires the Azure VPN Client version 2.2124.51.0, which is currently in the process of being rolled out.
+#### Secondary VPN client profile
+
+Azure VPN Client provides high availability by allowing you to add a secondary VPN client profile, providing a more resilient way to access the VPN. You can choose to add a secondary client profile using any of the already imported client profiles, which **enables the high availability** option for Windows. In case of a **region outage** or failure to connect to the primary VPN client profile, the Azure VPN Client can auto-connect to the secondary client profile without causing any disruptions. This setting requires Azure VPN Client version **2.2124.51.0**, which is currently in the process of being rolled out.
+
+#### Optional Azure VPN Client configuration settings
+
+You can configure the Azure VPN Client with optional configuration settings such as additional DNS servers, custom DNS, forced tunneling, custom routes, and other additional settings. For a description of the available optional settings and configuration steps, see [Azure VPN Client optional settings](azure-vpn-client-optional-configurations.md).
## <a name="openvpn"></a>OpenVPN - OpenVPN Client steps