Updates from: 03/31/2023 01:19:02
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c User Profile Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-profile-attributes.md
Your Azure Active Directory B2C (Azure AD B2C) directory user profile comes with
Most of the attributes that can be used with Azure AD B2C user profiles are also supported by Microsoft Graph. This article describes supported Azure AD B2C user profile attributes. It also notes those attributes that are not supported by Microsoft Graph, as well as Microsoft Graph attributes that should not be used with Azure AD B2C. > [!IMPORTANT]
-> You should'nt use built-in or extension attributes to store sensitive personal data, such as account credentials, government identification numbers, cardholder data, financial account data, healthcare information, or sensitive background information.
+> You shouldn't use built-in or extension attributes to store sensitive personal data, such as account credentials, government identification numbers, cardholder data, financial account data, healthcare information, or sensitive background information.
You can also integrate with external systems. For example, you can use Azure AD B2C for authentication, but delegate to an external customer relationship management (CRM) or customer loyalty database as the authoritative source of customer data. For more information, see the [remote profile](https://github.com/azure-ad-b2c/samples/tree/master/policies/remote-profile) solution.
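Where programmatic access helps, here's a minimal, hedged Microsoft Graph sketch for reading a B2C user's built-in attributes; the object ID and bearer token are placeholders, not values from this article:

```bash
# Read a few built-in profile attributes for a B2C user over Microsoft Graph.
# {user-object-id} and GRAPH_TOKEN are placeholders you supply yourself.
curl -s "https://graph.microsoft.com/v1.0/users/{user-object-id}?\$select=displayName,identities,city" \
  -H "Authorization: Bearer $GRAPH_TOKEN"
```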
active-directory-domain-services Join Centos Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-centos-linux-vm.md
Once the VM is deployed, follow the steps to connect to the VM using SSH.
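As an illustrative sketch only (the local admin name and public IP are placeholders, not values from the article), the SSH connection typically looks like:

```bash
# Connect with the local administrator account created with the VM.
# Replace the placeholders with your own values.
ssh azureuser@<vm-public-ip-address>
```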
To make sure that the VM host name is correctly configured for the managed domain, edit the */etc/hosts* file and set the hostname:
-```console
+```bash
sudo vi /etc/hosts ```
In the *hosts* file, update the *localhost* address. In the following example:
Update these names with your own values:
-```console
+```config
127.0.0.1 centos.aaddscontoso.com centos ```
When done, save and exit the *hosts* file using the `:wq` command of the editor.
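An optional check, not part of the original steps: confirm the fully qualified name now resolves to the managed domain entry you added.

```bash
# Should print the FQDN from the hosts entry, for example centos.aaddscontoso.com.
hostname -f
```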
The VM needs some additional packages to join the VM to the managed domain. To install and configure these packages, update and install the domain-join tools using `yum`:
-```console
+```bash
sudo yum install adcli realmd sssd krb5-workstation krb5-libs oddjob oddjob-mkhomedir samba-common-tools ```
Now that the required packages are installed on the VM, join the VM to the manag
1. Use the `realm discover` command to discover the managed domain. The following example discovers the realm *AADDSCONTOSO.COM*. Specify your own managed domain name in ALL UPPERCASE:
- ```console
+ ```bash
sudo realm discover AADDSCONTOSO.COM ```
Now that the required packages are installed on the VM, join the VM to the manag
Again, the managed domain name must be entered in ALL UPPERCASE. In the following example, the account named `contosoadmin@aaddscontoso.com` is used to initialize Kerberos. Enter your own user account that's a part of the managed domain:
- ```console
- kinit contosoadmin@AADDSCONTOSO.COM
+ ```bash
+ sudo kinit contosoadmin@AADDSCONTOSO.COM
``` 1. Finally, join the VM to the managed domain using the `realm join` command. Use the same user account that's a part of the managed domain that you specified in the previous `kinit` command, such as `contosoadmin@AADDSCONTOSO.COM`:
- ```console
+ ```bash
sudo realm join --verbose AADDSCONTOSO.COM -U 'contosoadmin@AADDSCONTOSO.COM' --membership-software=adcli ```
By default, users can only sign in to a VM using SSH public key-based authentica
1. Open the *sshd_conf* file with an editor:
- ```console
+ ```bash
sudo vi /etc/ssh/sshd_config ``` 1. Update the line for *PasswordAuthentication* to *yes*:
- ```console
+ ```bash
PasswordAuthentication yes ```
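Before restarting SSH, an optional sanity check (an editorial suggestion, not from the original article) validates the edited configuration:

```bash
# Test the sshd configuration; no output means the syntax is valid.
sudo sshd -t
```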
By default, users can only sign in to a VM using SSH public key-based authentica
1. To apply the changes and let users sign in using a password, restart the SSH service:
- ```console
+ ```bash
sudo systemctl restart sshd ```
To grant members of the *AAD DC Administrators* group administrative privileges
1. Open the *sudoers* file for editing:
- ```console
+ ```bash
sudo visudo ``` 1. Add the following entry to the end of */etc/sudoers* file. The *AAD DC Administrators* group contains whitespace in the name, so include the backslash escape character in the group name. Add your own domain name, such as *aaddscontoso.com*:
- ```console
+ ```config
# Add 'AAD DC Administrators' group members as admins. %AAD\ DC\ Administrators@aaddscontoso.com ALL=(ALL) NOPASSWD:ALL ```
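As a hedged extra step not in the original article, you can re-validate the sudoers syntax after saving:

```bash
# visudo checks syntax on save; -c re-runs the check on demand.
sudo visudo -c
```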
To verify that the VM has been successfully joined to the managed domain, start
1. Create a new SSH connection from your console. Use a domain account that belongs to the managed domain using the `ssh -l` command, such as `contosoadmin@aaddscontoso.com` and then enter the address of your VM, such as *centos.aaddscontoso.com*. If you use the Azure Cloud Shell, use the public IP address of the VM rather than the internal DNS name.
- ```console
- ssh -l contosoadmin@AADDSCONTOSO.com centos.aaddscontoso.com
+ ```bash
+ sudo ssh -l contosoadmin@AADDSCONTOSO.com centos.aaddscontoso.com
``` 1. When you've successfully connected to the VM, verify that the home directory was initialized correctly:
- ```console
- pwd
+ ```bash
+ sudo pwd
``` You should be in the */home* directory with your own directory that matches the user account. 1. Now check that the group memberships are being resolved correctly:
- ```console
- id
+ ```bash
+ sudo id
``` You should see your group memberships from the managed domain. 1. If you signed in to the VM as a member of the *AAD DC Administrators* group, check that you can correctly use the `sudo` command:
- ```console
+ ```bash
sudo yum update ```
active-directory-domain-services Join Rhel Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-rhel-linux-vm.md
Once the VM is deployed, follow the steps to connect to the VM using SSH.
To make sure that the VM host name is correctly configured for the managed domain, edit the */etc/hosts* file and set the hostname:
-```console
+```bash
sudo vi /etc/hosts ```
In the *hosts* file, update the *localhost* address. In the following example:
Update these names with your own values:
-```console
+```config
127.0.0.1 rhel rhel.aaddscontoso.com ``` When done, save and exit the *hosts* file using the `:wq` command of the editor.
-## Install required packages
-The VM needs some additional packages to join the VM to the managed domain. To install and configure these packages, update and install the domain-join tools using `yum`. There are some differences between RHEL 7.x and RHEL 6.x, so use the appropriate commands for your distro version in the remaining sections of this article.
+# [RHEL 6](#tab/rhel)
-**RHEL 7**
-```console
-sudo yum install realmd sssd krb5-workstation krb5-libs oddjob oddjob-mkhomedir samba-common-tools
-```
+> [!IMPORTANT]
+> Keep in mind that Red Hat Enterprise Linux 6.x and Oracle Linux 6.x have already reached end of life (EOL).
+> RHEL 6.10 has [ELS support](https://www.redhat.com/en/resources/els-datasheet) available, which [will end in June 2024](https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204).
-**RHEL 6**
+## Install required packages
-```console
+The VM needs some additional packages to join the VM to the managed domain. To install and configure these packages, update and install the domain-join tools using `yum`.
+
+```bash
sudo yum install adcli sssd authconfig krb5-workstation ```- ## Join VM to the managed domain
-Now that the required packages are installed on the VM, join the VM to the managed domain. Again, use the appropriate steps for your RHEL distro version.
-
-### RHEL 7
-
-1. Use the `realm discover` command to discover the managed domain. The following example discovers the realm *AADDSCONTOSO.COM*. Specify your own managed domain name in ALL UPPERCASE:
-
- ```console
- sudo realm discover AADDSCONTOSO.COM
- ```
-
- If the `realm discover` command can't find your managed domain, review the following troubleshooting steps:
-
- * Make sure that the domain is reachable from the VM. Try `ping aaddscontoso.com` to see if a positive reply is returned.
- * Check that the VM is deployed to the same, or a peered, virtual network in which the managed domain is available.
- * Confirm that the DNS server settings for the virtual network have been updated to point to the domain controllers of the managed domain.
-
-1. Now initialize Kerberos using the `kinit` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Azure AD](../active-directory/fundamentals/active-directory-groups-members-azure-portal.md).
-
- Again, the managed domain name must be entered in ALL UPPERCASE. In the following example, the account named `contosoadmin@aaddscontoso.com` is used to initialize Kerberos. Enter your own user account that's a part of the managed domain:
-
- ```console
- kinit contosoadmin@AADDSCONTOSO.COM
- ```
-
-1. Finally, join the VM to the managed domain using the `realm join` command. Use the same user account that's a part of the managed domain that you specified in the previous `kinit` command, such as `contosoadmin@AADDSCONTOSO.COM`:
-
- ```console
- sudo realm join --verbose AADDSCONTOSO.COM -U 'contosoadmin@AADDSCONTOSO.COM'
- ```
-
-It takes a few moments to join the VM to the managed domain. The following example output shows the VM has successfully joined to the managed domain:
-
-```output
-Successfully enrolled machine in realm
-```
-
-### RHEL 6
+Now that the required packages are installed on the VM, join the VM to the managed domain.
1. Use the `adcli info` command to discover the managed domain. The following example discovers the realm *AADDSCONTOSO.COM*. Specify your own managed domain name in ALL UPPERCASE:
- ```console
+ ```bash
sudo adcli info aaddscontoso.com ```- If the `adcli info` command can't find your managed domain, review the following troubleshooting steps: * Make sure that the domain is reachable from the VM. Try `ping aaddscontoso.com` to see if a positive reply is returned.
Successfully enrolled machine in realm
1. First, join the domain using the `adcli join` command. This command also creates the keytab used to authenticate the machine. Use a user account that's a part of the managed domain.
- ```console
+ ```bash
sudo adcli join aaddscontoso.com -U contosoadmin ``` 1. Now configure the `/etc/krb5.conf` and create the `/etc/sssd/sssd.conf` files to use the `aaddscontoso.com` Active Directory domain. Make sure that `AADDSCONTOSO.COM` is replaced by your own domain name:
- Open the `/ect/krb5.conf` file with an editor:
+ Open the `/etc/krb5.conf` file with an editor:
- ```console
+ ```bash
sudo vi /etc/krb5.conf ``` Update the `krb5.conf` file to match the following sample:
- ```console
+ ```config
[logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log
Successfully enrolled machine in realm
Create the `/etc/sssd/sssd.conf` file:
- ```console
+ ```bash
sudo vi /etc/sssd/sssd.conf ``` Update the `sssd.conf` file to match the following sample:
- ```console
+ ```config
[sssd] services = nss, pam, ssh, autofs config_file_version = 2
Successfully enrolled machine in realm
1. Make sure the `/etc/sssd/sssd.conf` permissions are 600 and the file is owned by the root user:
- ```console
+ ```bash
sudo chmod 600 /etc/sssd/sssd.conf sudo chown root:root /etc/sssd/sssd.conf ``` 1. Use `authconfig` to instruct the VM about the AD Linux integration:
- ```console
- sudo authconfig --enablesssd --enablesssdauth --update
+ ```bash
+ sudo authconfig --enablesssd --enablesssdauth --update
``` 1. Start and enable the sssd service:
- ```console
+ ```bash
sudo service sssd start sudo chkconfig sssd on ```
If your VM can't successfully complete the domain-join process, make sure that t
Now check if you can query user AD information using `getent`
-```console
+```bash
sudo getent passwd contosoadmin ```
By default, users can only sign in to a VM using SSH public key-based authentica
1. Open the *sshd_conf* file with an editor:
- ```console
+ ```bash
sudo vi /etc/ssh/sshd_config ``` 1. Update the line for *PasswordAuthentication* to *yes*:
- ```console
+ ```config
PasswordAuthentication yes ```
By default, users can only sign in to a VM using SSH public key-based authentica
1. To apply the changes and let users sign in using a password, restart the SSH service for your RHEL distro version:
- **RHEL 7**
+ ```bash
+ sudo service sshd restart
+ ```
+
- ```console
- sudo systemctl restart sshd
+# [RHEL 7](#tab/rhel7)
+
+## Install required packages
+
+The VM needs some additional packages to join the VM to the managed domain. To install and configure these packages, update and install the domain-join tools using `yum`.
+
+```bash
+sudo yum install realmd sssd krb5-workstation krb5-libs oddjob oddjob-mkhomedir samba-common-tools
+```
+## Join VM to the managed domain
+
+Now that the required packages are installed on the VM, join the VM to the managed domain.
+
+1. Use the `realm discover` command to discover the managed domain. The following example discovers the realm *AADDSCONTOSO.COM*. Specify your own managed domain name in ALL UPPERCASE:
+
+ ```bash
+ sudo realm discover AADDSCONTOSO.COM
```
- **RHEL 6**
+ If the `realm discover` command can't find your managed domain, review the following troubleshooting steps:
- ```console
- sudo service sshd restart
+ * Make sure that the domain is reachable from the VM. Try `ping aaddscontoso.com` to see if a positive reply is returned.
+ * Check that the VM is deployed to the same, or a peered, virtual network in which the managed domain is available.
+ * Confirm that the DNS server settings for the virtual network have been updated to point to the domain controllers of the managed domain.
+
+1. Now initialize Kerberos using the `kinit` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Azure AD](../active-directory/fundamentals/active-directory-groups-members-azure-portal.md).
+
+ Again, the managed domain name must be entered in ALL UPPERCASE. In the following example, the account named `contosoadmin@aaddscontoso.com` is used to initialize Kerberos. Enter your own user account that's a part of the managed domain:
+
+ ```bash
+ sudo kinit contosoadmin@AADDSCONTOSO.COM
+ ```
+
+1. Finally, join the VM to the managed domain using the `realm join` command. Use the same user account that's a part of the managed domain that you specified in the previous `kinit` command, such as `contosoadmin@AADDSCONTOSO.COM`:
+
+ ```bash
+ sudo realm join --verbose AADDSCONTOSO.COM -U 'contosoadmin@AADDSCONTOSO.COM'
+ ```
+
+It takes a few moments to join the VM to the managed domain. The following example output shows the VM has successfully joined to the managed domain:
+
+```output
+Successfully enrolled machine in realm
+```
+
+## Allow password authentication for SSH
+
+By default, users can only sign in to a VM using SSH public key-based authentication. Password-based authentication fails. When you join the VM to a managed domain, those domain accounts need to use password-based authentication. Update the SSH configuration to allow password-based authentication as follows.
+
+1. Open the *sshd_conf* file with an editor:
+
+ ```bash
+ sudo vi /etc/ssh/sshd_config
```
+1. Update the line for *PasswordAuthentication* to *yes*:
+
+ ```bash
+ PasswordAuthentication yes
+ ```
+
+ When done, save and exit the *sshd_conf* file using the `:wq` command of the editor.
+
+1. To apply the changes and let users sign in using a password, restart the SSH service.
+
+ ```bash
+ sudo systemctl restart sshd
+ ```
++ ## Grant the 'AAD DC Administrators' group sudo privileges To grant members of the *AAD DC Administrators* group administrative privileges on the RHEL VM, you add an entry to the */etc/sudoers*. Once added, members of the *AAD DC Administrators* group can use the `sudo` command on the RHEL VM. 1. Open the *sudoers* file for editing:
- ```console
+ ```bash
sudo visudo ``` 1. Add the following entry to the end of */etc/sudoers* file. The *AAD DC Administrators* group contains whitespace in the name, so include the backslash escape character in the group name. Add your own domain name, such as *aaddscontoso.com*:
- ```console
+ ```config
# Add 'AAD DC Administrators' group members as admins. %AAD\ DC\ Administrators@aaddscontoso.com ALL=(ALL) NOPASSWD:ALL ```
To verify that the VM has been successfully joined to the managed domain, start
1. Create a new SSH connection from your console. Use a domain account that belongs to the managed domain using the `ssh -l` command, such as `contosoadmin@aaddscontoso.com` and then enter the address of your VM, such as *rhel.aaddscontoso.com*. If you use the Azure Cloud Shell, use the public IP address of the VM rather than the internal DNS name.
- ```console
- ssh -l contosoadmin@AADDSCONTOSO.com rhel.aaddscontoso.com
+ ```bash
+ sudo ssh -l contosoadmin@AADDSCONTOSO.com rhel.aaddscontoso.com
``` 1. When you've successfully connected to the VM, verify that the home directory was initialized correctly:
- ```console
- pwd
+ ```bash
+ sudo pwd
``` You should be in the */home* directory with your own directory that matches the user account. 1. Now check that the group memberships are being resolved correctly:
- ```console
- id
+ ```bash
+ sudo id
``` You should see your group memberships from the managed domain. 1. If you signed in to the VM as a member of the *AAD DC Administrators* group, check that you can correctly use the `sudo` command:
- ```console
+ ```bash
sudo yum update ```
active-directory-domain-services Join Suse Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-suse-linux-vm.md
Once the VM is deployed, follow the steps to connect to the VM using SSH.
To make sure that the VM host name is correctly configured for the managed domain, edit the */etc/hosts* file and set the hostname:
-```console
+```bash
sudo vi /etc/hosts ```
In the *hosts* file, update the *localhost* address. In the following example:
Update these names with your own values:
-```console
+```config
127.0.0.1 linux-q2gr linux-q2gr.aaddscontoso.com ```
To join the managed domain using **winbind** and the *YaST command line interfac
* Join the domain:
- ```console
+ ```bash
sudo yast samba-client joindomain domain=aaddscontoso.com user=<admin> password=<admin password> machine=<(optional) machine account> ```
To join the managed domain using **winbind** and the *`samba net` command*:
1. Install kerberos client and samba-winbind:
- ```console
+ ```bash
sudo zypper in krb5-client samba-winbind ```
To join the managed domain using **winbind** and the *`samba net` command*:
* /etc/samba/smb.conf
- ```ini
+ ```config
[global] workgroup = AADDSCONTOSO usershare allow guests = NO #disallow guests from sharing
To join the managed domain using **winbind** and the *`samba net` command*:
* /etc/krb5.conf
- ```ini
+ ```config
[libdefaults] default_realm = AADDSCONTOSO.COM clockskew = 300
To join the managed domain using **winbind** and the *`samba net` command*:
* /etc/security/pam_winbind.conf
- ```ini
+ ```config
[global] cached_login = yes krb5_auth = yes
To join the managed domain using **winbind** and the *`samba net` command*:
* /etc/nsswitch.conf
- ```ini
+ ```config
passwd: compat winbind group: compat winbind ``` 3. Check that the date and time in Azure AD and Linux are in sync. You can do this by adding the Azure AD server to the NTP service:
- 1. Add the following line to /etc/ntp.conf:
+ 1. Add the following line to `/etc/ntp.conf`:
- ```console
+ ```config
server aaddscontoso.com ``` 1. Restart the NTP service:
- ```console
+ ```bash
sudo systemctl restart ntpd ``` 4. Join the domain:
- ```console
+ ```bash
sudo net ads join -U Administrator%Mypassword ``` 5. Enable winbind as a login source in the Linux Pluggable Authentication Modules (PAM):
- ```console
- pam-config --add --winbind
+ ```bash
+ sudo pam-config --add --winbind
``` 6. Enable automatic creation of home directories so that users can log in:
- ```console
- pam-config -a --mkhomedir
+ ```bash
+ sudo pam-config -a --mkhomedir
``` 7. Start and enable the winbind service:
- ```console
+ ```bash
sudo systemctl enable winbind sudo systemctl start winbind ```
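A couple of optional, hedged checks (not part of the original steps) confirm winbind can talk to the managed domain before you continue:

```bash
# Verify the machine account trust and list domain users through winbind.
sudo wbinfo -t
sudo wbinfo -u
```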
By default, users can only sign in to a VM using SSH public key-based authentica
1. Open the *sshd_conf* file with an editor:
- ```console
+ ```bash
sudo vi /etc/ssh/sshd_config ``` 1. Update the line for *PasswordAuthentication* to *yes*:
- ```console
+ ```config
PasswordAuthentication yes ```
By default, users can only sign in to a VM using SSH public key-based authentica
1. To apply the changes and let users sign in using a password, restart the SSH service:
- ```console
+ ```bash
sudo systemctl restart sshd ```
To grant members of the *AAD DC Administrators* group administrative privileges
1. Open the *sudoers* file for editing:
- ```console
+ ```bash
sudo visudo ``` 1. Add the following entry to the end of */etc/sudoers* file. The *AAD DC Administrators* group contains whitespace in the name, so include the backslash escape character in the group name. Add your own domain name, such as *aaddscontoso.com*:
- ```console
+ ```config
# Add 'AAD DC Administrators' group members as admins. %AAD\ DC\ Administrators@aaddscontoso.com ALL=(ALL) NOPASSWD:ALL ```
To verify that the VM has been successfully joined to the managed domain, start
1. Create a new SSH connection from your console. Use a domain account that belongs to the managed domain using the `ssh -l` command, such as `contosoadmin@aaddscontoso.com` and then enter the address of your VM, such as *linux-q2gr.aaddscontoso.com*. If you use the Azure Cloud Shell, use the public IP address of the VM rather than the internal DNS name.
- ```console
- ssh -l contosoadmin@AADDSCONTOSO.com linux-q2gr.aaddscontoso.com
+ ```bash
+ sudo ssh -l contosoadmin@AADDSCONTOSO.com linux-q2gr.aaddscontoso.com
``` 2. When you've successfully connected to the VM, verify that the home directory was initialized correctly:
- ```console
- pwd
+ ```bash
+ sudo pwd
``` You should be in the */home* directory with your own directory that matches the user account. 3. Now check that the group memberships are being resolved correctly:
- ```console
- id
+ ```bash
+ sudo id
``` You should see your group memberships from the managed domain. 4. If you signed in to the VM as a member of the *AAD DC Administrators* group, check that you can correctly use the `sudo` command:
- ```console
+ ```bash
sudo zypper update ```
active-directory-domain-services Join Ubuntu Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-ubuntu-linux-vm.md
Once the VM is deployed, follow the steps to connect to the VM using SSH.
To make sure that the VM host name is correctly configured for the managed domain, edit the */etc/hosts* file and set the hostname:
-```console
+```bash
sudo vi /etc/hosts ```
In the *hosts* file, update the *localhost* address. In the following example:
Update these names with your own values:
-```console
+```config
127.0.0.1 ubuntu.aaddscontoso.com ubuntu ```
The VM needs some additional packages to join the VM to the managed domain. To i
During the Kerberos installation, the *krb5-user* package prompts for the realm name in ALL UPPERCASE. For example, if the name of your managed domain is *aaddscontoso.com*, enter *AADDSCONTOSO.COM* as the realm. The installation writes the `[realm]` and `[domain_realm]` sections in the */etc/krb5.conf* configuration file. Make sure that you specify the realm in ALL UPPERCASE:
-```console
+```bash
sudo apt-get update sudo apt-get install krb5-user samba sssd sssd-tools libnss-sss libpam-sss ntp ntpdate realmd adcli ```
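An optional, hedged check (not in the original article) confirms the installer wrote the realm sections for your managed domain:

```bash
# Show the realm-related sections the krb5-user install added; expect your
# managed domain realm, for example AADDSCONTOSO.COM.
grep -A 3 -E '^\[(realms|domain_realm)\]' /etc/krb5.conf
```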
For domain communication to work correctly, the date and time of your Ubuntu VM
1. Open the *ntp.conf* file with an editor:
- ```console
+ ```bash
sudo vi /etc/ntp.conf ``` 1. In the *ntp.conf* file, create a line to add your managed domain's DNS name. In the following example, an entry for *aaddscontoso.com* is added. Use your own DNS name:
- ```console
+ ```config
server aaddscontoso.com ```
For domain communication to work correctly, the date and time of your Ubuntu VM
Run the following commands to complete these steps. Use your own DNS name with the `ntpdate` command:
- ```console
+ ```bash
sudo systemctl stop ntp sudo ntpdate aaddscontoso.com sudo systemctl start ntp
Now that the required packages are installed on the VM and NTP is configured, jo
1. Use the `realm discover` command to discover the managed domain. The following example discovers the realm *AADDSCONTOSO.COM*. Specify your own managed domain name in ALL UPPERCASE:
- ```console
+ ```bash
sudo realm discover AADDSCONTOSO.COM ```
Now that the required packages are installed on the VM and NTP is configured, jo
Again, the managed domain name must be entered in ALL UPPERCASE. In the following example, the account named `contosoadmin@aaddscontoso.com` is used to initialize Kerberos. Enter your own user account that's a part of the managed domain:
- ```console
- kinit -V contosoadmin@AADDSCONTOSO.COM
+ ```bash
+ sudo kinit -V contosoadmin@AADDSCONTOSO.COM
``` 1. Finally, join the VM to the managed domain using the `realm join` command. Use the same user account that's a part of the managed domain that you specified in the previous `kinit` command, such as `contosoadmin@AADDSCONTOSO.COM`:
- ```console
+ ```bash
sudo realm join --verbose AADDSCONTOSO.COM -U 'contosoadmin@AADDSCONTOSO.COM' --install=/ ```
If your VM can't successfully complete the domain-join process, make sure that t
If you received the error *Unspecified GSS failure. Minor code may provide more information (Server not found in Kerberos database)*, open the file */etc/krb5.conf* and add the following code in `[libdefaults]` section and try again:
-```console
+```config
rdns=false ```
One of the packages installed in a previous step was for System Security Service
1. Open the *sssd.conf* file with an editor:
- ```console
+ ```bash
sudo vi /etc/sssd/sssd.conf ``` 1. Comment out the line for *use_fully_qualified_names* as follows:
- ```console
+ ```config
# use_fully_qualified_names = True ```
One of the packages installed in a previous step was for System Security Service
1. To apply the change, restart the SSSD service:
- ```console
+ ```bash
sudo systemctl restart sssd ```
By default, users can only sign in to a VM using SSH public key-based authentica
1. Open the *sshd_conf* file with an editor:
- ```console
+ ```bash
sudo vi /etc/ssh/sshd_config ``` 1. Update the line for *PasswordAuthentication* to *yes*:
- ```console
+ ```config
PasswordAuthentication yes ```
By default, users can only sign in to a VM using SSH public key-based authentica
1. To apply the changes and let users sign in using a password, restart the SSH service:
- ```console
+ ```bash
sudo systemctl restart ssh ```
By default, users can only sign in to a VM using SSH public key-based authentica
To enable automatic creation of the home directory when a user first signs in, complete the following steps:
-1. Open the */etc/pam.d/common-session* file in an editor:
+1. Open the `/etc/pam.d/common-session` file in an editor:
- ```console
+ ```bash
sudo vi /etc/pam.d/common-session ``` 1. Add the following line in this file below the line `session optional pam_sss.so`:
- ```console
+ ```config
session required pam_mkhomedir.so skel=/etc/skel/ umask=0077 ```
To grant members of the *AAD DC Administrators* group administrative privileges
1. Open the *sudoers* file for editing:
- ```console
+ ```bash
sudo visudo ``` 1. Add the following entry to the end of */etc/sudoers* file:
- ```console
+ ```config
# Add 'AAD DC Administrators' group members as admins. %AAD\ DC\ Administrators ALL=(ALL) NOPASSWD:ALL ```
To verify that the VM has been successfully joined to the managed domain, start
1. Create a new SSH connection from your console. Use a domain account that belongs to the managed domain using the `ssh -l` command, such as `contosoadmin@aaddscontoso.com` and then enter the address of your VM, such as *ubuntu.aaddscontoso.com*. If you use the Azure Cloud Shell, use the public IP address of the VM rather than the internal DNS name.
- ```console
- ssh -l contosoadmin@AADDSCONTOSO.com ubuntu.aaddscontoso.com
+ ```bash
+ sudo ssh -l contosoadmin@AADDSCONTOSO.com ubuntu.aaddscontoso.com
``` 1. When you've successfully connected to the VM, verify that the home directory was initialized correctly:
- ```console
- pwd
+ ```bash
+ sudo pwd
``` You should be in the */home* directory with your own directory that matches the user account. 1. Now check that the group memberships are being resolved correctly:
- ```console
- id
+ ```bash
+ sudo id
``` You should see your group memberships from the managed domain. 1. If you signed in to the VM as a member of the *AAD DC Administrators* group, check that you can correctly use the `sudo` command:
- ```console
+ ```bash
sudo apt-get update ```
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md
Previously updated : 03/28/2023 Last updated : 03/29/2023
Use the steps in the example to provision roles for a user to your application.
![Add SingleAppRoleAssignment](./media/customize-application-attributes/edit-attribute-singleapproleassignment.png) - **Things to consider**
- - Ensure that multiple roles aren't assigned to a user. There is no guarantee which role is provisioned.
+ - Ensure that multiple roles aren't assigned to a user. There's no guarantee which role is provisioned.
- SingleAppRoleAssignments isn't compatible with setting scope to "Sync All users and groups." - **Example request (POST)**
Certain attributes such as phoneNumbers and emails are multi-value attributes wh
## Restoring the default attributes and attribute-mappings
-Should you need to start over and reset your existing mappings back to their default state, you can select the **Restore default mappings** check box and save the configuration. Doing so sets all mappings and scoping filters as if the application was just added to your Azure AD tenant from the application gallery.
+Should you need to start over and reset your existing mappings back to their default state, you can select the **Restore default mappings** check box and save the configuration. Doing so sets all mappings and scoping filters as if the application was added to your Azure AD tenant from the application gallery.
-Selecting this option will effectively force a resynchronization of all users while the provisioning service is running.
+Selecting this option forces a resynchronization of all users while the provisioning service is running.
> [!IMPORTANT] > We strongly recommend that **Provisioning status** be set to **Off** before invoking this option.
Selecting this option will effectively force a resynchronization of all users wh
- Updating attribute-mappings has an impact on the performance of a synchronization cycle. An update to the attribute-mapping configuration requires all managed objects to be reevaluated. - A recommended best practice is to keep the number of consecutive changes to your attribute-mappings at a minimum. - Adding a photo attribute to be provisioned to an app isn't supported today as you can't specify the format to sync the photo. You can request the feature on [User Voice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789)-- The attribute IsSoftDeleted is often part of the default mappings for an application. IsSoftdeleted can be true in one of four scenarios (the user is out of scope due to being unassigned from the application, the user is out of scope due to not meeting a scoping filter, the user has been soft deleted in Azure AD, or the property AccountEnabled is set to false on the user). It's not recommended to remove the IsSoftDeleted attribute from your attribute mappings.
+- The attribute `IsSoftDeleted` is often part of the default mappings for an application. `IsSoftdeleted` can be true in one of four scenarios: 1) The user is out of scope due to being unassigned from the application. 2) The user is out of scope due to not meeting a scoping filter. 3) The user has been soft deleted in Azure AD. 4) The property `AccountEnabled` is set to false on the user. It's not recommended to remove the `IsSoftDeleted` attribute from your attribute mappings.
- The Azure AD provisioning service doesn't support provisioning null values. - The primary key, typically "ID", shouldn't be included as a target attribute in your attribute mappings. - The role attribute typically needs to be mapped using an expression, rather than a direct mapping. For more information about role mapping, see [Provisioning a role to a SCIM app](#provisioning-a-role-to-a-scim-app).
active-directory How Provisioning Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/how-provisioning-works.md
Previously updated : 02/13/2023 Last updated : 03/30/2023
To request an automatic Azure AD provisioning connector for an app that doesn't
## Authorization
-Credentials are required for Azure AD to connect to the application's user management API. While you're configuring automatic user provisioning for an application, you need to enter valid credentials. For gallery applications, you can find credential types and requirements for the application by referring to the app tutorial. For non-gallery applications, you can refer to the [SCIM](./use-scim-to-provision-users-and-groups.md#authorization-to-provisioning-connectors-in-the-application-gallery) documentation to understand the credential types and requirements. In the Azure portal, you are able to test the credentials by having Azure AD attempt to connect to the app's provisioning app using the supplied credentials.
+Credentials are required for Azure AD to connect to the application's user management API. While you're configuring automatic user provisioning for an application, you need to enter valid credentials. For gallery applications, you can find credential types and requirements for the application by referring to the app tutorial. For non-gallery applications, you can refer to the [SCIM](./use-scim-to-provision-users-and-groups.md#authorization-to-provisioning-connectors-in-the-application-gallery) documentation to understand the credential types and requirements. In the Azure portal, you're able to test the credentials by having Azure AD attempt to connect to the app's provisioning app using the supplied credentials.
## Mapping attributes
After the initial cycle, all other cycles will:
The provisioning service continues running back-to-back incremental cycles indefinitely, at intervals defined in the [tutorial specific to each application](../saas-apps/tutorial-list.md). Incremental cycles continue until one of the following events occurs: - The service is manually stopped using the Azure portal, or using the appropriate Microsoft Graph API command.-- A new initial cycle is triggered using the **Restart provisioning** option in the Azure portal, or using the appropriate Microsoft Graph API command. This action clears any stored watermark and causes all source objects to be evaluated again. This will not break the links between source and target objects. To break the links use [Restart synchronizationJob](/graph/api/synchronization-synchronizationjob-restart?view=graph-rest-beta&tabs=http&preserve-view=true) with the following request:
+- A new initial cycle is triggered using the **Restart provisioning** option in the Azure portal, or using the appropriate Microsoft Graph API command. This action clears any stored watermark and causes all source objects to be evaluated again. This won't break the links between source and target objects. To break the links use [Restart synchronizationJob](/graph/api/synchronization-synchronizationjob-restart?view=graph-rest-beta&tabs=http&preserve-view=true) with the following request:
<!-- { "blockType": "request",
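The request body is truncated in this digest; as a hedged sketch (the service principal ID, job ID, and token are placeholders), the call to restart the job and clear the links looks roughly like:

```bash
# Restart the provisioning job with a full reset, which breaks source/target links.
curl -X POST \
  "https://graph.microsoft.com/beta/servicePrincipals/{servicePrincipalId}/synchronization/jobs/{jobId}/restart" \
  -H "Authorization: Bearer $GRAPH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"criteria": {"resetScope": "Full"}}'
```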
The provisioning service supports both deleting and disabling (sometimes referre
**Configure your application to disable a user**
-Ensure that you have selected the checkbox for updates.
+Confirm that the checkbox for updates is selected.
-Ensure that you have the mapping for *active* for your application. If your using an application from the app gallery, the mapping may be slightly different. Please ensure that you use the default / out of the box mapping for gallery applications.
+Confirm the mapping for *active* for your application. If you're using an application from the app gallery, the mapping may be slightly different. In this case, use the default mapping for gallery applications.
:::image type="content" source="./media/how-provisioning-works/disable-user.png" alt-text="Disable a user" lightbox="./media/how-provisioning-works/disable-user.png":::
Ensure that you have the mapping for *active* for your application. If your usin
The following scenarios will trigger a disable or a delete: * A user is soft deleted in Azure AD (sent to the recycle bin / AccountEnabled property set to false).
- 30 days after a user is deleted in Azure AD, they will be permanently deleted from the tenant. At this point, the provisioning service will send a DELETE request to permanently delete the user in the application. At any time during the 30-day window, you can [manually delete a user permanently](../fundamentals/active-directory-users-restore.md), which sends a delete request to the application.
+ 30 days after a user is deleted in Azure AD, they're permanently deleted from the tenant. At this point, the provisioning service sends a DELETE request to permanently delete the user in the application. At any time during the 30-day window, you can [manually delete a user permanently](../fundamentals/active-directory-users-restore.md), which sends a delete request to the application.
* A user is permanently deleted / removed from the recycle bin in Azure AD. * A user is unassigned from an app. * A user goes from in scope to out of scope (doesn't pass a scoping filter anymore).
The following scenarios will trigger a disable or a delete:
By default, the Azure AD provisioning service soft deletes or disables users that go out of scope. If you want to override this default behavior, you can set a flag to [skip out-of-scope deletions.](skip-out-of-scope-deletions.md)
-If one of the above four events occurs and the target application does not support soft deletes, the provisioning service will send a DELETE request to permanently delete the user from the app.
+If one of the above four events occurs and the target application doesn't support soft deletes, the provisioning service will send a DELETE request to permanently delete the user from the app.
-If you see an attribute IsSoftDeleted in your attribute mappings, it is used to determine the state of the user and whether to send an update request with active = false to soft delete the user.
+If you see an attribute IsSoftDeleted in your attribute mappings, it's used to determine the state of the user and whether to send an update request with active = false to soft delete the user.
**Deprovisioning events**
The following table describes how you can configure deprovisioning actions with
|--|--| |If a user is unassigned from an app, soft-deleted in Azure AD, or blocked from sign-in, do nothing.|Remove isSoftDeleted from the attribute mappings and / or set the [skip out of scope deletions](skip-out-of-scope-deletions.md) property to true.| |If a user is unassigned from an app, soft-deleted in Azure AD, or blocked from sign-in, set a specific attribute to true / false.|Map isSoftDeleted to the attribute that you would like to set to false.|
-|When a user is disabled in Azure AD, unassigned from an app, soft-deleted in Azure AD, or blocked from sign-in, send a DELETE request to the target application.|This is currently supported for a limited set of gallery applications where the functionality is required. It is not configurable by customers.|
-|When a user is deleted in Azure AD, do nothing in the target application.|Ensure that "Delete" is not selected as one of the target object actions in the [attriubte configuration experience](skip-out-of-scope-deletions.md).|
+|When a user is disabled in Azure AD, unassigned from an app, soft-deleted in Azure AD, or blocked from sign-in, send a DELETE request to the target application.|This is currently supported for a limited set of gallery applications where the functionality is required. It's not configurable by customers.|
+|When a user is deleted in Azure AD, do nothing in the target application.|Ensure that "Delete" isn't selected as one of the target object actions in the [attribute configuration experience](skip-out-of-scope-deletions.md).|
|When a user is deleted in Azure AD, set the value of an attribute in the target application.|Not supported.| |When a user is deleted in Azure AD, delete the user in the target application|This is supported. Ensure that Delete is selected as one of the target object actions in the [attribute configuration experience](skip-out-of-scope-deletions.md).| **Known limitations**
-* If a user that was previously managed by the provisioning service is unassigned from an app, or from a group assigned to an app we will send a disable request. At that point, the user is not managed by the service and we will not send a delete request when they are deleted from the directory.
-* Provisioning a user that is disabled in Azure AD is not supported. They must be active in Azure AD before they are provisioned.
-* When a user goes from soft-deleted to active, the Azure AD provisioning service will activate the user in the target app, but will not automatically restore the group memberships. The target application should maintain the group memberships for the user in inactive state. If the target application does not support this, you can restart provisioning to update the group memberships.
+* If a user that was previously managed by the provisioning service is unassigned from an app, or from a group assigned to an app, we send a disable request. At that point, the user isn't managed by the service and we won't send a delete request when they're deleted from the directory.
+* Provisioning a user that is disabled in Azure AD isn't supported. They must be active in Azure AD before they're provisioned.
+* When a user goes from soft-deleted to active, the Azure AD provisioning service will activate the user in the target app, but won't automatically restore the group memberships. The target application should maintain the group memberships for the user in inactive state. If the target application doesn't support this, you can restart provisioning to update the group memberships.
**Recommendation**
active-directory How To Mfa Server Migration Utility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md
Previously updated : 01/29/2023 Last updated : 03/27/2023
Migrating user data doesn't remove or alter any data in the Multi-Factor Authent
The MFA Server Migration utility targets a single Azure AD group for all migration activities. You can add users directly to this group, or add other groups. You can also add them in stages during the migration.
-To begin the migration process, enter the name or GUID of the Azure AD group you want to migrate. Once complete, press Tab or click outside the window and the utility will begin searching for the appropriate group. The window will populate all users in the group. A large group can take several minutes to finish.
+To begin the migration process, enter the name or GUID of the Azure AD group you want to migrate. When you're done, press Tab or click outside the window to begin searching for the appropriate group. All users in the group are then populated. A large group can take several minutes to finish.
To view attribute data for a user, highlight the user, and select **View**: :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/view-user.png" alt-text="Screenshot of how to view use settings.":::
-This window displays the attributes for the selected user in both Azure AD and the on-premises MFA Server. You can use this window to view how data was written to a user after they've been migrated.
+This window displays the attributes for the selected user in both Azure AD and the on-premises MFA Server. You can use this window to view how data was written to a user after migration.
-The settings option allows you to change the settings for the migration process:
+The **Settings** option allows you to change the settings for the migration process:
:::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/settings.png" alt-text="Screenshot of settings.":::
The settings option allows you to change the settings for the migration process:
- The migration utility tries direct matching to UPN before using the on-premises Active Directory attribute. - If no match is found, it calls a Windows API to find the Azure AD UPN and get the SID, which it uses to search the MFA Server user list. - If the Windows API doesn't find the user or the SID isn't found in the MFA Server, then it will use the configured Active Directory attribute to find the user in the on-premises Active Directory, and then use the SID to search the MFA Server user list.- Automatic synchronization – Starts a background service that will continually monitor any authentication method changes to users in the on-premises MFA Server, and write them to Azure AD at the specified time interval defined
+- Automatic synchronization – Starts a background service that continually monitors any authentication method changes to users in the on-premises MFA Server and writes them to Azure AD at the defined time interval.
+- Synchronization server – Allows the MFA Server Migration Sync service to run on a secondary MFA Server rather than only on the primary. To configure the Migration Sync service to run on a secondary server, the `Configure-MultiFactorAuthMigrationUtility.ps1` script must be run on the server to register a certificate with the MFA Server Migration Utility app registration. The certificate is used to authenticate to Microsoft Graph.
-The migration process can be an automatic process, or a manual process.
+The migration process can be automatic or manual.
The manual process steps are: 1. To begin the migration process for a user or selection of multiple users, press and hold the Ctrl key while selecting each user you wish to migrate. 1. After you select the desired users, click **Migrate Users** > **Selected users** > **OK**. 1. To migrate all users in the group, click **Migrate Users** > **All users in AAD group** > **OK**.
+1. You can migrate users even if they are unchanged. By default, the utility is set to **Only migrate users that have changed**. Click **Migrate all users** to re-migrate previously migrated users that are unchanged. Migrating unchanged users can be useful during testing if an administrator needs to reset a user's Azure MFA settings and wants to re-migrate them.
-For the automatic process, click **Automatic synchronization** in the settings dialog, and then select whether you want all users to be synced, or only members of a given Azure AD group.
+ :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/migrate-users.png" alt-text="Screenshot of Migrate users dialog.":::
+
+For the automatic process, click **Automatic synchronization** in **Settings**, and then select whether you want all users to be synced, or only members of a given Azure AD group.
The following table lists the sync logic for the various methods.
active-directory Concept Conditional Access Report Only https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-report-only.md
Previously updated : 01/24/2023 Last updated : 03/30/2023
Report-only mode is a new Conditional Access policy state that allows administra
> [!WARNING] > Policies in report-only mode that require compliant devices may prompt users on Mac, iOS, and Android to select a device certificate during policy evaluation, even though device compliance is not enforced. These prompts may repeat until the device is made compliant. To prevent end users from receiving prompts during sign-in, exclude device platforms Mac, iOS and Android from report-only policies that perform device compliance checks. Note that report-only mode is not applicable for Conditional Access policies with "User Actions" scope.
-![Report-only tab in Azure AD sign-in log](./media/concept-conditional-access-report-only/report-only-detail-in-sign-in-log.png)
+![Screenshot showing the report-only tab in a sign-in log.](./media/concept-conditional-access-report-only/report-only-detail-in-sign-in-log.png)
## Policy results
active-directory Concept Conditional Access Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-session.md
Previously updated : 02/27/2023 Last updated : 03/28/2023
For more information, see the article [Configure authentication session manageme
- **Disable** only work when **All cloud apps** are selected, no conditions are selected, and **Disable** is selected under **Session** > **Customize continuous access evaluation** in a Conditional Access policy. You can choose to disable all users or specific users and groups. - :::image type="content" source="media/concept-conditional-access-session/continuous-access-evaluation-session-controls.png" alt-text="CAE Settings in a new Conditional Access policy in the Azure portal." lightbox="media/concept-conditional-access-session/continuous-access-evaluation-session-controls.png":::
-## Disable resilience defaults (Preview)
+## Disable resilience defaults
During an outage, Azure AD extends access to existing sessions while enforcing Conditional Access policies. If resilience defaults are disabled, access is denied once existing sessions expire. For more information, see the article [Conditional Access: Resilience defaults](resilience-defaults.md).
+## Require token protection for sign-in sessions (preview)
+
+Token protection (sometimes referred to as token binding in the industry) attempts to reduce attacks using token theft by ensuring a token is usable only from the intended device. When an attacker is able to steal a token, by hijacking or replay, they can impersonate their victim until the token expires or is revoked. Token theft is thought to be a relatively rare event, but the damage from it can be significant.
+
+The preview works for specific scenarios only. For more information, see the article [Conditional Access: Token protection (preview)](concept-token-protection.md).
+ ## Next steps - [Conditional Access common policies](concept-conditional-access-policy-common.md)
active-directory Howto Conditional Access Insights Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-insights-reporting.md
Previously updated : 02/27/2023 Last updated : 03/28/2023
To access the insights and reporting workbook:
The insights and reporting dashboard lets you see the impact of one or more Conditional Access policies over a specified period. Start by setting each of the parameters at the top of the workbook.
-![Conditional Access Insights and Reporting dashboard in the Azure portal](./media/howto-conditional-access-insights-reporting/conditional-access-insights-and-reporting-dashboard.png)
**Conditional Access policy**: Select one or more Conditional Access policies to view their combined impact. Policies are separated into two groups: Enabled and Report-only policies. By default, all Enabled policies are selected. These enabled policies are the policies currently enforced in your tenant.
The insights and reporting dashboard lets you see the impact of one or more Cond
Once the parameters have been set, the impact summary loads. The summary shows how many users or sign-ins during the time range resulted in "Success", "Failure", "User action required", or "Not applied" when the selected policies were evaluated.
-![Impact summary in the Conditional Access workbook](./media/howto-conditional-access-insights-reporting/workbook-impact-summary.png)
+![Screenshot showing an example impact summary in the Conditional Access workbook.](./media/howto-conditional-access-insights-reporting/workbook-impact-summary.png)
**Total**: The number of users or sign-ins during the time period where at least one of the selected policies was evaluated.
Once the parameters have been set, the impact summary loads. The summary shows h
### Understanding the impact
-![Workbook breakdown per condition and status](./media/howto-conditional-access-insights-reporting/workbook-breakdown-condition-and-status.png)
+![Screenshot showing a workbook breakdown per condition and status.](./media/howto-conditional-access-insights-reporting/workbook-breakdown-condition-and-status.png)
View the breakdown of users or sign-ins for each of the conditions. You can filter the sign-ins of a particular result (for example, Success or Failure) by selecting one of the summary tiles at the top of the workbook. You can see the breakdown of sign-ins for each of the Conditional Access conditions: device state, device platform, client app, location, application, and sign-in risk. ## Sign-in details
-![Workbook sign-in details](./media/howto-conditional-access-insights-reporting/workbook-sign-in-details.png)
+![Screenshot showing workbook sign-in details.](./media/howto-conditional-access-insights-reporting/workbook-sign-in-details.png)
-You can also investigate the sign-ins of a specific user by searching for sign-ins at the bottom of the dashboard. The query on the left displays the most frequent users. Selecting a user filters the query to the right.
+You can also investigate the sign-ins of a specific user by searching for sign-ins at the bottom of the dashboard. The query displays the most frequent users. Selecting a user filters the query.
> [!NOTE] > When downloading the Sign-ins logs, choose JSON format to include Conditional Access report-only result data.
In order to access the workbook, you need the proper Azure AD permissions and Lo
1. Type `SigninLogs` into the query box and select **Run**. 1. If the query doesn't return any results, your workspace may not have been configured correctly.
-![Troubleshoot failing queries](./media/howto-conditional-access-insights-reporting/query-troubleshoot-sign-in-logs.png)
+![Screenshot showing how to troubleshoot failing queries.](./media/howto-conditional-access-insights-reporting/query-troubleshoot-sign-in-logs.png)
For more information about how to stream Azure AD sign-in logs to a Log Analytics workspace, see the article [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
active-directory Howto Policy Approved App Or App Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-approved-app-or-app-protection.md
After confirming your settings using [report-only mode](howto-conditional-access
[Conditional Access common policies](concept-conditional-access-policy-common.md)
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+[Migrate approved client app to application protection policy in Conditional Access](migrate-approved-client-app.md)
active-directory Migrate Approved Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/migrate-approved-client-app.md
+
+ Title: Migrate approved client app to application protection policy in Conditional Access
+description: The approved client app control is going away. Migrate to App protection policies.
+++++ Last updated : 03/28/2023++++++++
+# Migrate approved client app to application protection policy in Conditional Access
+
+In this article, you learn how to migrate from the approved client app Conditional Access grant to the application protection policy grant. App protection policies provide the same data loss protection as approved client app policies, but with other benefits. For more information about the benefits of using app protection policies, see the article [App protection policies overview](/mem/intune/apps/app-protection-policy).
+
+The approved client app grant is retiring in early March 2026. Organizations must transition all current Conditional Access policies that use only the Require Approved Client App grant to Require Approved Client App or Application Protection Policy by March 2026. Additionally, for any new Conditional Access policy, only apply the Require application protection policy grant.
+
+After March 2026, Microsoft will stop enforcing the require approved client app control, and it will be as if this grant isn't selected. Use the following steps before March 2026 to protect your organization's data.
+
+## Edit an existing Conditional Access policy
+
+Require approved client apps or app protection policy with mobile devices
+
+The following steps make an existing Conditional Access policy require an approved client app or an app protection policy when using an iOS/iPadOS or Android device. This policy works in tandem with an app protection policy created in Microsoft Intune.
+
+Organizations can choose to update their policies using the following steps.
+
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select a policy that uses the approved client app grant.
+1. Under **Access controls** > **Grant**, select **Grant access**.
+ 1. Select **Require approved client app** and **Require app protection policy**.
+ 1. **For multiple controls**, select **Require one of the selected controls**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+
+Repeat the previous steps on all of your policies that use the approved client app grant.
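+
+If you prefer to inventory these policies programmatically, the following Microsoft Graph sketch is one way to do it. It assumes an app or account with the `Policy.Read.All` and `Policy.ReadWrite.ConditionalAccess` permissions; `{policy-id}` and `{access-token}` are placeholders. Policies whose `grantControls.builtInControls` collection contains `approvedApplication` are candidates for migration:
+
+```http
+GET https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
+Authorization: Bearer {access-token}
+```
+
+A candidate policy could then be updated so that it requires either control, mirroring the portal steps above:
+
+```http
+PATCH https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies/{policy-id}
+Authorization: Bearer {access-token}
+Content-Type: application/json
+
+{
+  "grantControls": {
+    "operator": "OR",
+    "builtInControls": [ "approvedApplication", "compliantApplication" ]
+  }
+}
+```
+
+Here `compliantApplication` corresponds to **Require app protection policy**, and `"operator": "OR"` corresponds to **Require one of the selected controls**.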
+
+> [!WARNING]
+> Not all applications are supported as approved applications or support application protection policies. For a list of some common client apps, see [App protection policy requirement](concept-conditional-access-grant.md#require-app-protection-policy). If your application is not listed there, contact the application developer.
+
+## Create a Conditional Access policy
+
+Require app protection policy with mobile devices
+
+The following steps help create a Conditional Access policy requiring an approved client app or an app protection policy when using an iOS/iPadOS or Android device. This policy works in tandem with an [app protection policy created in Microsoft Intune](/mem/intune/apps/app-protection-policies).
+
+Organizations can choose to deploy this policy using the following steps.
+
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select **New policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users or workload identities**.
+ 1. Under **Include**, select **All users**.
+ 1. Under **Exclude**, select **Users and groups** and exclude at least one account to prevent yourself from being locked out. If you don't exclude any accounts, you can't create the policy.
+1. Under **Cloud apps or actions**, select **All cloud apps**.
+1. Under **Conditions** > **Device platforms**, set **Configure** to **Yes**.
+ 1. Under **Include**, **Select device platforms**.
+ 1. Choose **Android** and **iOS**.
+ 1. Select **Done**.
+1. Under **Access controls** > **Grant**, select **Grant access**.
+ 1. Select **Require approved client app** and **Require app protection policy**.
+ 1. **For multiple controls**, select **Require one of the selected controls**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
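+
+The equivalent policy can also be created directly through Microsoft Graph. The following is a minimal sketch, assuming the `Policy.ReadWrite.ConditionalAccess` permission; the display name, the excluded emergency-access account, and `{access-token}` are placeholders you would replace with your own values:
+
+```http
+POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
+Authorization: Bearer {access-token}
+Content-Type: application/json
+
+{
+  "displayName": "Require approved client app or app protection policy",
+  "state": "enabledForReportingButNotEnforced",
+  "conditions": {
+    "users": {
+      "includeUsers": [ "All" ],
+      "excludeUsers": [ "{emergency-access-account-id}" ]
+    },
+    "applications": { "includeApplications": [ "All" ] },
+    "platforms": { "includePlatforms": [ "android", "iOS" ] },
+    "clientAppTypes": [ "all" ]
+  },
+  "grantControls": {
+    "operator": "OR",
+    "builtInControls": [ "approvedApplication", "compliantApplication" ]
+  }
+}
+```
+
+The `state` value `enabledForReportingButNotEnforced` corresponds to **Report-only**; switch it to `enabled` after you validate the results.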
+
+> [!NOTE]
+> If an app does not support **Require app protection policy**, end users trying to access resources from that app will be blocked.
+
+## Next steps
+
+For more information on application protection policies, see:
+
+[App protection policies overview](/mem/intune/apps/app-protection-policy)
active-directory Terms Of Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/terms-of-use.md
Previously updated : 05/26/2022 Last updated : 03/30/2023
For more videos, see:
## What can I do with terms of use?
-Azure AD terms of use policies have the following capabilities:
--- Require employees or guests to accept your terms of use policy before getting access.-- Require employees or guests to accept your terms of use policy on every device before getting access.-- Require employees or guests to accept your terms of use policy on a recurring schedule.-- Require employees or guests to accept your terms of use policy before registering security information in Azure AD Multifactor Authentication (MFA).-- Require employees to accept your terms of use policy before registering security information in Azure AD self-service password reset (SSPR).-- Present a general terms of use policy for all users in your organization.-- Present specific terms of use policies based on a user attributes (such as doctors versus nurses, or domestic versus international employees) by using [dynamic groups](../enterprise-users/groups-dynamic-membership.md)).-- Present specific terms of use policies when accessing high business impact applications, like Salesforce.-- Present terms of use policies in different languages.-- List who has or hasn't accepted to your terms of use policies.-- Help meeting privacy regulations.-- Display a log of terms of use policy activity for compliance and audit.-- Create and manage terms of use policies using [Microsoft Graph APIs](/graph/api/resources/agreement).
+Organizations can use terms of use along with Conditional Access policies to require employees or guests to accept their terms of use policy before getting access. These terms of use statements can be generalized or specific to groups or users and provided in multiple languages. Administrators can determine who has or hasn't accepted terms of use with the provided logs or APIs.
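+
+Acceptance data is also available through Microsoft Graph. The following is a minimal sketch, assuming the `Agreement.Read.All` permission; `{agreement-id}` (which you can find by listing `/identityGovernance/termsOfUse/agreements`) and `{access-token}` are placeholders:
+
+```http
+GET https://graph.microsoft.com/v1.0/identityGovernance/termsOfUse/agreements/{agreement-id}/acceptances
+Authorization: Bearer {access-token}
+```
+
+Each returned acceptance record identifies the user, whether the terms were accepted or declined, and when that happened.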
## Prerequisites To use and configure Azure AD terms of use policies, you must have: -- Azure AD Premium P1, P2, EMS E3, or EMS E5 licenses.
- - If you don't have one of these subscriptions, you can [get Azure AD Premium](../fundamentals/active-directory-get-started-premium.md) or [enable Azure AD Premium trial](https://azure.microsoft.com/trial/get-started-active-directory/).
-- One of the following administrator accounts for the directory you want to configure:
- - Global Administrator
- - Security Administrator
- - Conditional Access Administrator
+* A working Azure AD tenant with Azure AD Premium P1 or a trial license enabled. If needed, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* Administrators who interact with terms of use must have one or more of the following role assignments depending on the tasks they're performing. To follow the [Zero Trust principle of least privilege](/security/zero-trust/), consider using [Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md) to just-in-time activate privileged role assignments.
+ * Read terms of use configuration and Conditional Access policies
+ * [Security Reader](../roles/permissions-reference.md#security-reader)
+ * [Global Reader](../roles/permissions-reference.md#global-reader)
+ * Create or modify terms of use and Conditional Access policies
+ * [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator)
+ * [Security Administrator](../roles/permissions-reference.md#security-administrator)
## Terms of use document
Azure AD terms of use policies use the PDF format to present content. The PDF fi
Once you've completed your terms of use policy document, use the following procedure to add it.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator or Security Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select **New terms**. ![New term of use pane to specify your terms of use settings](./media/terms-of-use/new-tou.png)
-1. In the **Name** box, enter a name for the terms of use policy that will be used in the Azure portal.
+1. In the **Name** box, enter a name for the terms of use policy used in the Azure portal.
1. For **Terms of use document**, browse to your finalized terms of use policy PDF and select it.
-1. Select the language for your terms of use policy document. The language option allows you to upload multiple terms of use policies, each with a different language. The version of the terms of use policy that an end user will see will be based on their browser preferences.
+1. Select the language for your terms of use policy document. The language option allows you to upload multiple terms of use policies, each with a different language. The version of the terms of use policy that an end user sees is based on their browser preferences.
1. In the **Display name** box, enter a title that users see when they sign in. 1. To require end users to view the terms of use policy before accepting them, set **Require users to expand the terms of use** to **On**. 1. To require end users to accept your terms of use policy on every device they're accessing from, set **Require users to consent on every device** to **On**. Users may be required to install other applications if this option is enabled. For more information, see [Per-device terms of use](#per-device-terms-of-use).
Once you've completed your terms of use policy document, use the following proce
| Expire starting on | Frequency | Result | | | | | | Today's date | Monthly | Starting today, users must accept the terms of use policy and then reaccept every month. |
- | Date in the future | Monthly | Starting today, users must accept the terms of use policy. When the future date occurs, consents will expire, and then users must reaccept every month. |
+ | Date in the future | Monthly | Starting today, users must accept the terms of use policy. When the future date occurs, consents expire, and then users must reaccept every month. |
For example, if you set the expire starting on date to **Jan 1** and frequency to **Monthly**, this is how expirations might occur for two users:
Once you've completed your terms of use policy document, use the following proce
| Alice | Jan 1 | Feb 1 | Mar 1 | Apr 1 | | Bob | Jan 15 | Feb 1 | Mar 1 | Apr 1 |
-1. Use the **Duration before re-acceptance required (days)** setting to specify the number of days before the user must reaccept the terms of use policy. This allows users to follow their own schedule. For example, if you set the duration to **30** days, this is how expirations might occur for two users:
+1. Use the **Duration before re-acceptance required (days)** setting to specify the number of days before the user must reaccept the terms of use policy. This option allows users to follow their own schedule. For example, if you set the duration to **30** days, this is how expirations might occur for two users:
| User | First accept date | First expire date | Second expire date | Third expire date | | | | | | |
Once you've completed your terms of use policy document, use the following proce
| Template | Description | | | |
- | **Custom policy** | Select the users, groups, and apps that this terms of use policy will be applied to. |
- | **Create Conditional Access policy later** | This terms of use policy will appear in the grant control list when creating a Conditional Access policy. |
+ | **Custom policy** | Select the users, groups, and apps that this terms of use policy is applied to. |
+ | **Create Conditional Access policy later** | This terms of use policy appears in the grant control list when creating a Conditional Access policy. |
> [!IMPORTANT] > Conditional Access policy controls (including terms of use policies) do not support enforcement on service accounts. We recommend excluding all service accounts from the Conditional Access policy.
To get started with Azure AD audit logs, use the following procedure:
## What terms of use looks like for users
-Once a ToU policy is created and enforced, users, who are in scope, will see the following screen during sign-in.
+Once a ToU policy is created and enforced, users who are in scope see the following screen during sign-in.
![Example terms of use that appears when a user signs in](./media/terms-of-use/user-tou.png)
You can edit some details of terms of use policies, but you can't modify an exis
1. In the Edit terms of use pane, you can change the following options: - **Name** – the internal name of the ToU that isn't shared with end users - **Display name** – the name that end users can see when viewing the ToU
- - **Require users to expand the terms of use** – Setting this option to **On** will force the end user to expand the terms of use policy document before accepting it.
+ - **Require users to expand the terms of use** – Setting this option to **On** forces the end user to expand the terms of use policy document before accepting it.
- (Preview) You can **update an existing terms of use** document - You can add a language to an existing ToU
You can edit some details of terms of use policies, but you can't modify an exis
![Edit terms of use pane showing name and expand options](./media/terms-of-use/edit-terms-use.png) 1. In the pane on the right, upload the pdf for the new version
-1. There's also a toggle option here **Require reaccept** if you want to require your users to accept this new version the next time they sign in. If you require your users to reaccept, next time they try to access the resource defined in your conditional access policy they'll be prompted to accept this new version. If you don't require your users to reaccept, their previous consent will stay current and only new users who haven't consented before or whose consent expires will see the new version. Until the session expires, **Require reaccept** not require users to accept the new TOU. If you want to ensure reaccept, delete and recreate or create a new TOU for this case.
+1. There's also a toggle option here, **Require reaccept**, if you want to require your users to accept this new version the next time they sign in. If you require your users to reaccept, the next time they try to access the resource defined in your Conditional Access policy, they'll be prompted to accept this new version. If you don't require your users to reaccept, their previous consent stays current and only new users who haven't consented before or whose consent expires see the new version. Until the session expires, **Require reaccept** doesn't require users to accept the new TOU. If you want to ensure reacceptance, delete and recreate the ToU, or create a new ToU for this case.
![Edit terms of use re-accept option highlighted](./media/terms-of-use/re-accept.png) 1. Once you've uploaded your new pdf and decided on reaccept, select Add at the bottom of the pane.
-1. You'll now see the most recent version under the Document column.
+1. You see the most recent version under the Document column.
## View previous versions of a ToU
The following procedure describes how to add a ToU language.
## Per-device terms of use
-The **Require users to consent on every device** setting enables you to require end users to accept your terms of use policy on every device they're accessing from. The end user will be required to register their device in Azure AD. When the device is registered, the device ID is used to enforce the terms of use policy on each device.
+The **Require users to consent on every device** setting enables you to require end users to accept your terms of use policy on every device they're accessing from. The end user is required to register their device in Azure AD. When the device is registered, the device ID is used to enforce the terms of use policy on each device.
Supported platforms and software.
Supported platforms and software.
> | **Internet Explorer** | Yes | Yes | Yes | | > | **Chrome (with extension)** | Yes | Yes | Yes | |
-Per-device terms of use has the following constraints:
+Per-device terms of use have the following constraints:
- A device can only be joined to one tenant. - A user must have permissions to join their device. - The Intune Enrollment app isn't supported. Ensure that it's excluded from any Conditional Access policy requiring Terms of Use policy. - Azure AD B2B users aren't supported.
-If the user's device isn't joined, they'll receive a message that they need to join their device. Their experience will be dependent on the platform and software.
+If the user's device isn't joined, they receive a message that they need to join their device. Their experience is dependent on the platform and software.
### Join a Windows 10 device
If a user is using Windows 10 and Microsoft Edge, they receive a message similar
![Windows 10 and Microsoft Edge - Message indicating your device must be registered](./media/terms-of-use/per-device-win10-edge.png)
-If they're using Chrome, they'll be prompted to install the [Windows 10 Accounts extension](https://chrome.google.com/webstore/detail/windows-10-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji).
+If they're using Chrome, they're prompted to install the [Windows 10 Accounts extension](https://chrome.google.com/webstore/detail/windows-10-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji).
### Register an iOS device
-If a user is using an iOS device, they'll be prompted to install the [Microsoft Authenticator app](https://apps.apple.com/us/app/microsoft-authenticator/id983156458).
+If a user is using an iOS device, they're prompted to install the [Microsoft Authenticator app](https://apps.apple.com/us/app/microsoft-authenticator/id983156458).
### Register an Android device
-If a user is using an Android device, they'll be prompted to install the [Microsoft Authenticator app](https://play.google.com/store/apps/details?id=com.azure.authenticator).
+If a user is using an Android device, they're prompted to install the [Microsoft Authenticator app](https://play.google.com/store/apps/details?id=com.azure.authenticator).
### Browsers
-If a user is using browser that isn't supported, they'll be asked to use a different browser.
+If a user is using a browser that isn't supported, they're asked to use a different browser.
![Message indicating your device must be registered, but browser is not supported](./media/terms-of-use/per-device-browser-unsupported.png)
User acceptance records are deleted:
## Policy changes
-Conditional Access policies take effect immediately. When this happens, the administrator will start to see "sad clouds" or "Azure AD token issues". The administrator must sign out and sign in to satisfy the new policy.
+Conditional Access policies take effect immediately. When this happens, the administrator starts to see "sad clouds" or "Azure AD token issues". The administrator must sign out and sign in to satisfy the new policy.
> [!IMPORTANT] > Users in scope will need to sign-out and sign-in in order to satisfy a new policy if:
Terms of use policies can be used for different cloud apps, such as Azure Inform
### Azure Information Protection
-You can configure a Conditional Access policy for the Azure Information Protection app and require a terms of use policy when a user accesses a protected document. This configuration will trigger a terms of use policy before a user accessing a protected document for the first time.
+You can configure a Conditional Access policy for the Azure Information Protection app and require a terms of use policy when a user accesses a protected document. This configuration triggers a terms of use policy before a user accesses a protected document for the first time.
![Cloud apps pane with Microsoft Azure Information Protection app selected](./media/terms-of-use/cloud-app-info-protection.png)
A: The user counts in the terms of use report and who accepted/declined are stor
A: The terms of use details overview data is stored for the lifetime of that terms of use policy, while the Azure AD audit logs are stored for 30 days. **Q: Why do I see a different number of consents in the terms of use details overview versus the exported CSV report?**<br />
-A: The terms of use details overview reflects aggregated acceptances of the current version of the policy (updated once every day). If expiration is enabled or a TOU agreement is updated (with re-acceptance required), the count on the details overview is reset since the acceptances are expired, thereby showing the count of the current version. All acceptance history is still captured in the CSV report.
+A: The terms of use details overview reflects aggregated acceptances of the current version of the policy (updated once every day). If expiration is enabled or a TOU agreement is updated (with reacceptance required), the count on the details overview is reset since the acceptances are expired, thereby showing the count of the current version. All acceptance history is still captured in the CSV report.
**Q: If hyperlinks are in the terms of use policy PDF document, will end users be able to click them?**<br /> A: Yes, end users are able to select hyperlinks to other pages but links to sections within the document aren't supported. Also, hyperlinks in terms of use policy PDFs don't work when accessed from the Azure AD MyApps/MyAccount portal.
A: The user is blocked from getting access to the application. The user would ha
A: You can [review previously accepted terms of use policies](#how-users-can-review-their-terms-of-use), but currently there isn't a way to unaccept. **Q: What happens if I'm also using Intune terms and conditions?**<br />
-A: If you've configured both Azure AD terms of use and [Intune terms and conditions](/intune/terms-and-conditions-create), the user will be required to accept both. For more information, see the [Choosing the right Terms solution for your organization blog post](https://go.microsoft.com/fwlink/?linkid=2010506&clcid=0x409).
+A: If you've configured both Azure AD terms of use and [Intune terms and conditions](/intune/terms-and-conditions-create), the user is required to accept both. For more information, see the [Choosing the right Terms solution for your organization blog post](https://go.microsoft.com/fwlink/?linkid=2010506&clcid=0x409).
**Q: What endpoints does the terms of use service use for authentication?**<br />
-A: Terms of use utilize the following endpoints for authentication: https://tokenprovider.termsofuse.identitygovernance.azure.com, https://myaccount.microsoft.com and https://account.activedirectory.windowsazure.com. If your organization has an allowlist of URLs for enrollment, you'll need to add these endpoints to your allowlist, along with the Azure AD endpoints for sign-in.
+A: Terms of use utilize the following endpoints for authentication: https://tokenprovider.termsofuse.identitygovernance.azure.com, https://myaccount.microsoft.com and https://account.activedirectory.windowsazure.com. If your organization has an allowlist of URLs for enrollment, you need to add these endpoints to your allowlist, along with the Azure AD endpoints for sign-in.
## Next steps
active-directory App Only Access Primer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-only-access-primer.md
In most cases, application-only access is broader and more powerful than [delega
In contrast, you should never use application-only access where a user would normally sign in to manage their own resources. These types of scenarios must use delegated access to be least privileged.
-![Diagram shows illustration of application permissions vs delegated permissions.](./media/permissions-consent-overview/delegated-app-only-permissions.png)
+![Diagram shows illustration of application permissions vs delegated permissions.](./media/app-only-access-primer/app-only-access.png)
active-directory Console Quickstart Portal Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/console-quickstart-portal-nodejs.md
-+ Last updated 08/22/2022
active-directory Daemon Quickstart Portal Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/daemon-quickstart-portal-java.md
-+ Last updated 08/22/2022
active-directory Daemon Quickstart Portal Netcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/daemon-quickstart-portal-netcore.md
-+ Last updated 08/22/2022
active-directory Daemon Quickstart Portal Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/daemon-quickstart-portal-python.md
-+ Last updated 08/22/2022
active-directory Delegated Access Primer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/delegated-access-primer.md
People frequently use different applications to access their data from cloud ser
Use delegated access whenever you want to let a signed-in user work with their own resources or resources they can access. Whether itΓÇÖs an admin setting up policies for their entire organization or a user deleting an email in their inbox, all scenarios involving user actions should use delegated access.
-![Diagram shows illustration of delegated permissions vs application permissions.](./media/permissions-consent-overview/delegated-app-only-permissions.png)
+![Diagram shows illustration of delegated access scenario.](./media/delegated-access-primer/delegated-access.png)
In contrast, delegated access is usually a poor choice for scenarios that must run without a signed-in user, like automation. It may also be a poor choice for scenarios that involve accessing many usersΓÇÖ resources, like data loss prevention or backups. Consider using [application-only access](permissions-consent-overview.md) for these types of operations.
active-directory Desktop Quickstart Portal Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-quickstart-portal-nodejs-desktop.md
-+ Last updated 08/18/2022
active-directory Desktop Quickstart Portal Uwp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-quickstart-portal-uwp.md
-+ Last updated 08/18/2022
active-directory Desktop Quickstart Portal Wpf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-quickstart-portal-wpf.md
-+ Last updated 08/18/2022
active-directory Mobile App Quickstart Portal Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-android.md
-+ Last updated 02/15/2022
active-directory Mobile App Quickstart Portal Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-ios.md
-+ Last updated 02/15/2022
active-directory Quickstart V2 Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-android.md
-+ Last updated 01/14/2022
active-directory Quickstart V2 Aspnet Core Web Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-core-web-api.md
-+ Last updated 12/09/2022
active-directory Quickstart V2 Aspnet Core Webapp Calls Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp-calls-graph.md
-+ Last updated 11/22/2021
active-directory Quickstart V2 Aspnet Core Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md
-+ Last updated 11/22/2021
active-directory Quickstart V2 Aspnet Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-webapp.md
-+ Last updated 11/22/2021
active-directory Quickstart V2 Dotnet Native Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-dotnet-native-aspnet.md
-+ Last updated 01/11/2022
active-directory Quickstart V2 Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-ios.md
-+ Last updated 01/14/2022
active-directory Quickstart V2 Java Daemon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-java-daemon.md
-+ Last updated 01/10/2022
active-directory Quickstart V2 Java Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-java-webapp.md
-+ Last updated 11/22/2021
active-directory Quickstart V2 Javascript Auth Code Angular https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript-auth-code-angular.md
-+ Last updated 11/12/2021
active-directory Quickstart V2 Javascript Auth Code React https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript-auth-code-react.md
-+ Last updated 11/12/2021
active-directory Quickstart V2 Javascript Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript-auth-code.md
-+ Last updated 11/12/2021
active-directory Quickstart V2 Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript.md
-+ Last updated 04/11/2019
active-directory Quickstart V2 Netcore Daemon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-netcore-daemon.md
-+ Last updated 01/10/2022
active-directory Quickstart V2 Nodejs Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-console.md
-+ Last updated 01/10/2022
active-directory Quickstart V2 Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-desktop.md
-+ Last updated 01/14/2022
active-directory Quickstart V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-webapp-msal.md
-+ Last updated 11/22/2021
active-directory Quickstart V2 Python Daemon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-python-daemon.md
-+ Last updated 01/10/2022
active-directory Quickstart V2 Python Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-python-webapp.md
-+ Last updated 01/24/2023
active-directory Quickstart V2 Uwp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-uwp.md
-+ Last updated 01/14/2022
active-directory Quickstart V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-windows-desktop.md
-+ Last updated 01/14/2022
active-directory Spa Quickstart Portal Javascript Auth Code Angular https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-javascript-auth-code-angular.md
-+ Last updated 08/16/2022
active-directory Spa Quickstart Portal Javascript Auth Code React https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-javascript-auth-code-react.md
-+ Last updated 08/16/2022
active-directory Spa Quickstart Portal Javascript Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-javascript-auth-code.md
-+ Last updated 08/16/2022
active-directory Web Api Quickstart Portal Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-api-quickstart-portal-aspnet-core.md
-+ Last updated 08/16/2022
active-directory Web Api Quickstart Portal Dotnet Native Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-api-quickstart-portal-dotnet-native-aspnet.md
-+ Last updated 08/16/2022
active-directory Web App Quickstart Portal Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-aspnet-core.md
-+ Last updated 08/16/2022
active-directory Web App Quickstart Portal Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-aspnet.md
-+ Last updated 08/16/2022
active-directory Web App Quickstart Portal Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-java.md
-+ Last updated 08/16/2022
active-directory Web App Quickstart Portal Node Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-node-js.md
-+ Last updated 08/16/2022
active-directory Web App Quickstart Portal Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-python.md
-+ Last updated 08/16/2022
active-directory Domains Verify Custom Subdomain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-verify-custom-subdomain.md
Because subdomains inherit the authentication type of the root domain by default
Use the following command to promote the subdomain: ```http
-POST https://graph.microsoft.com/{tenant-id}/domains/foo.contoso.com/promote
+POST https://graph.microsoft.com/v1.0/{tenant-id}/domains/foo.contoso.com/promote
``` ### Promote command error conditions
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 03/28/2023 Last updated : 03/30/2023
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]
->This information last updated on March 28th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on March 30th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft 365 F5 Compliance Add-on GCC | SPE_F5_COMP_GCC | 3f17cf90-67a2-4fdb-8587-37c1539507e1 | Customer Lockbox for Government (89b5d3b1-3855-49fe-b46c-87c66dbc1526)<br/>Data Loss Prevention (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>Exchange Online Archiving (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Office 365 Advanced eDiscovery for Government (d1cbfb67-18a8-4792-b643-630b7f19aad1)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | LOCKBOX_ENTERPRISE_GOV (89b5d3b1-3855-49fe-b46c-87c66dbc1526)<br/>BPOS_S_DlpAddOn (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>EQUIVIO_ANALYTICS_GOV (d1cbfb67-18a8-4792-b643-630b7f19aad1)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | | Microsoft 365 F5 Security Add-on | SPE_F5_SEC | 67ffe999-d9ca-49e1-9d2c-03fb28aa7a48 | MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f) | Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Azure Active Directory Premium P2 
(eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f) | | Microsoft 365 F5 Security + Compliance Add-on | SPE_F5_SECCOMP | 32b47245-eb31-44fc-b945-a8b1576c439f | LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>BPOS_S_DlpAddOn (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f) | Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Loss Prevention (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>Exchange Online Archiving (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 SafeDocs 
(bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f) |
-| Microsoft Flow Free | FLOW_FREE | f30db892-07e9-47e9-837c-80727f46fd3d | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170) | COMMON DATA SERVICE - VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FREE (50e68c76-46c6-4674-81f9-75456511b170) |
+| Microsoft Power Automate Free | FLOW_FREE | f30db892-07e9-47e9-837c-80727f46fd3d | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170) | COMMON DATA SERVICE (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FREE (50e68c76-46c6-4674-81f9-75456511b170) |
| Microsoft 365 E5 Suite Features | M365_E5_SUITE_COMPONENTS | 99cc8282-2f74-4954-83b7-c6a9a1999067 | Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e) | Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e) | | Microsoft 365 F1 | M365_F1_COMM | 50f60901-3181-4b75-8a2c-4c8e4c1d5a72 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>VIVAENGAGE_CORE (a82fbf69-b4d7-49f4-83a6-915b2cf354f4)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/> RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>Viva Engage Core (a82fbf69-b4d7-49f4-83a6-915b2cf354f4)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | | Microsoft 365 F3 GCC | M365_F1_GOV | 2a914830-d700-444a-b73c-e3f31980d833 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>DYN365_CDS_O365_F1_GCC (29007dd3-36c0-4cc2-935d-f5bca2c2c473)<br/>CDS_O365_F1_GCC (5e05331a-0aec-437e-87db-9ef5934b5771)<br/>EXCHANGE_S_DESKLESS_GOV (88f4d7ef-a73b-4246-8047-516022144c9f)<br/>FORMS_GOV_F1 (bfd4133a-bbf3-4212-972b-60412137c428)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A 
(c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K_GOV (d65648f1-9504-46e4-8611-2658763f28b8)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708- 6ee03664b117)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>OFFICEMOBILE_SUBSCRIPTION_GOV (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>POWERAPPS_O365_S1_GOV (49f06c3d-da7d-4fa0-bcce-1458fdd18a59)<br/>FLOW_O365_S1_GOV (5d32692e-5b24-4a59-a77e-b2a8650e25c1)<br/>SHAREPOINTDESKLESS_GOV (b1aeb897-3a19-46e2-8c27-a609413cf193)<br/>MCOIMP_GOV (8a9f17f1-5872-44e8-9b11-3caade9dc90f)<br/>BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 for GCC (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>Azure Rights Management (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>Common Data Service - O365 F1 GCC (29007dd3-36c0-4cc2-935d-f5bca2c2c473)<br/>Common Data Service for Teams_F1 GCC (5e05331a-0aec-437e-87db-9ef5934b5771)<br/>Exchange Online (Kiosk) for Government (88f4d7ef-a73b-4246-8047-516022144c9f)<br/>Forms for Government (Plan F1) (bfd4133a-bbf3-4212-972b-60412137c428)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft Stream for O365 for Government (F1) (d65648f1-9504-46e4-8611-2658763f28b8)<br/>Microsoft Teams for Government (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Planner for Government (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>Office for the Web for Government (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Office Mobile Apps for Office 365 for GCC (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>Power Apps for Office 365 F3 for Government (49f06c3d-da7d-4fa0-bcce-1458fdd18a59)<br/>Power Automate for Office 365 F3 for Government (5d32692e-5b24-4a59-a77e-b2a8650e25c1)<br/>SharePoint KioskG (b1aeb897-3a19-46e2-8c27-a609413cf193)<br/>Skype for Business Online (Plan 1) for Government (8a9f17f1-5872-44e8-9b11-3caade9dc90f)<br/>To-Do (Firstline) (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>Whiteboard (Firstline) (36b29273-c6d0-477a-aca6-6fbe24f538e3) |
active-directory Concept Secure Remote Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-secure-remote-workers.md
Previously updated : 02/27/2023 Last updated : 03/28/2023
There are many recommendations that Azure AD Free, Office 365, or Microsoft 365
| [Enable ADFS smart lock out](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection) (If applicable) | Protects your users from experiencing extranet account lockout from malicious activity. | | [Enable Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md) (if using managed identities) | Smart lockout helps to lock out bad actors who are trying to guess your users' passwords or use brute-force methods to get in. | | [Disable end-user consent to applications](../manage-apps/configure-user-consent.md) | The admin consent workflow gives admins a secure way to grant access to applications that require admin approval so end users don't expose corporate data. Microsoft recommends disabling future user consent operations to help reduce your surface area and mitigate this risk. |
-| [Integrate supported SaaS applications from the gallery to Azure AD and enable Single sign on](../manage-apps/add-application-portal.md) | Azure AD has a gallery that contains thousands of pre-integrated applications. Some of the applications your organization uses are probably in the gallery accessible directly from the Azure portal. Provide access to corporate SaaS applications remotely and securely with improved user experience (SSO) |
+| [Integrate supported SaaS applications from the gallery to Azure AD and enable Single sign on](../manage-apps/add-application-portal.md) | Azure AD has a gallery that contains thousands of preintegrated applications. Some of the applications your organization uses are probably in the gallery accessible directly from the Azure portal. Provide access to corporate SaaS applications remotely and securely with improved user experience (SSO) |
| [Automate user provisioning and deprovisioning from SaaS Applications](../app-provisioning/user-provisioning.md) (if applicable) | Automatically create user identities and roles in the cloud (SaaS) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change, increasing your organization's security. | | [Enable Secure hybrid access: Secure legacy apps with existing app delivery controllers and networks](../manage-apps/secure-hybrid-access.md) (if applicable) | Publish and protect your on-premises and cloud legacy authentication applications by connecting them to Azure AD with your existing application delivery controller or network. | | [Enable self-service password reset](../authentication/tutorial-enable-sspr.md) (applicable to cloud only accounts) | This ability reduces help desk calls and loss of productivity when a user can't sign into their device or an application. |
The following table is intended to highlight the key actions for the following l
| [Disable end-user consent to applications](../manage-apps/configure-user-consent.md) | The admin consent workflow gives admins a secure way to grant access to applications that require admin approval so end users don't expose corporate data. Microsoft recommends disabling future user consent operations to help reduce your surface area and mitigate this risk. | | [Enable remote access to on-premises legacy applications with Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md) | Enable Azure AD Application Proxy and integrate with legacy apps for users to securely access on-premises applications by signing in with their Azure AD account. | | [Enable Secure hybrid access: Secure legacy apps with existing app delivery controllers and networks](../manage-apps/secure-hybrid-access.md) (if applicable). | Publish and protect your on-premises and cloud legacy authentication applications by connecting them to Azure AD with your existing application delivery controller or network. |
-| [Integrate supported SaaS applications from the gallery to Azure AD and enable Single sign on](../manage-apps/add-application-portal.md) | Azure AD has a gallery that contains thousands of pre-integrated applications. Some of the applications your organization uses are probably in the gallery accessible directly from the Azure portal. Provide access to corporate SaaS applications remotely and securely with improved user experience (SSO). |
+| [Integrate supported SaaS applications from the gallery to Azure AD and enable Single sign on](../manage-apps/add-application-portal.md) | Azure AD has a gallery that contains thousands of preintegrated applications. Some of the applications your organization uses are probably in the gallery accessible directly from the Azure portal. Provide access to corporate SaaS applications remotely and securely with improved user experience (SSO). |
| [Automate user provisioning and deprovisioning from SaaS Applications](../app-provisioning/user-provisioning.md) (if applicable) | Automatically create user identities and roles in the cloud (SaaS) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change, increasing your organization's security. | | [Enable Conditional Access ΓÇô Device based](../conditional-access/require-managed-devices.md) | Improve security and user experiences with device-based Conditional Access. This step ensures users can only access from devices that meet your standards for security and compliance. These devices are also known as managed devices. Managed devices can be Intune compliant or Hybrid Azure AD joined devices. | | [Enable Password Protection](../authentication/howto-password-ban-bad-on-premises-deploy.md) | Protect users from using weak and easy to guess passwords. |
The following table is intended to highlight the key actions for the following l
| [Disable end-user consent to applications](../manage-apps/configure-user-consent.md) | The admin consent workflow gives admins a secure way to grant access to applications that require admin approval so end users don't expose corporate data. Microsoft recommends disabling future user consent operations to help reduce your surface area and mitigate this risk. | | [Enable remote access to on-premises legacy applications with Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md) | Enable Azure AD Application Proxy and integrate with legacy apps for users to securely access on-premises applications by signing in with their Azure AD account. | | [Enable Secure hybrid access: Secure legacy apps with existing app delivery controllers and networks](../manage-apps/secure-hybrid-access.md) (if applicable). | Publish and protect your on-premises and cloud legacy authentication applications by connecting them to Azure AD with your existing application delivery controller or network. |
-| [Integrate supported SaaS applications from the gallery to Azure AD and enable Single sign on](../manage-apps/add-application-portal.md) | Azure AD has a gallery that contains thousands of pre-integrated applications. Some of the applications your organization uses are probably in the gallery accessible directly from the Azure portal. Provide access to corporate SaaS applications remotely and securely with improved user experience (SSO). |
+| [Integrate supported SaaS applications from the gallery to Azure AD and enable Single sign on](../manage-apps/add-application-portal.md) | Azure AD has a gallery that contains thousands of preintegrated applications. Some of the applications your organization uses are probably in the gallery accessible directly from the Azure portal. Provide access to corporate SaaS applications remotely and securely with improved user experience (SSO). |
| [Automate user provisioning and deprovisioning from SaaS Applications](../app-provisioning/user-provisioning.md) (if applicable) | Automatically create user identities and roles in the cloud (SaaS) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change, increasing your organization's security. | | [Enable Conditional Access – Device based](../conditional-access/require-managed-devices.md) | Improve security and user experiences with device-based Conditional Access. This step ensures users can only access from devices that meet your standards for security and compliance. These devices are also known as managed devices. Managed devices can be Intune compliant or Hybrid Azure AD joined devices. | | [Enable Password Protection](../authentication/howto-password-ban-bad-on-premises-deploy.md) | Protect users from using weak and easy to guess passwords. |
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page updates monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md).
+## March 2023
++
+### Public Preview - New provisioning connectors in the Azure AD Application Gallery - March 2023
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+
+We've added the following new applications in our App gallery with Provisioning support. You can now automate creating, updating, and deleting of user accounts for these newly integrated apps:
+
+- [Acunetix 360](../saas-apps/acunetix-360-provisioning-tutorial.md)
+- [Akamai Enterprise Application Access](../saas-apps/akamai-enterprise-application-access-provisioning-tutorial.md)
+- [Ardoq](../saas-apps/ardoq-provisioning-tutorial.md)
+- [Torii](../saas-apps/torii-provisioning-tutorial.md)
++
+For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
++++
+### General Availability - Workload identity Federation for Managed Identities
+
+**Type:** New feature
+**Service category:** Managed identities for Azure resources
+**Product capability:** Developer Experience
+
+Workload Identity Federation enables developers to use managed identities for their software workloads running anywhere and access Azure resources without needing secrets. Key scenarios include:
+- Accessing Azure resources from Kubernetes pods running in any cloud or on-premises
+- GitHub workflows to deploy to Azure, no secrets necessary
+- Accessing Azure resources from other cloud platforms that support OIDC, such as Google Cloud Platform.
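+
+As a minimal sketch of the Kubernetes scenario above, the following Azure CLI command configures a federated credential on a user-assigned managed identity so a Kubernetes service account can exchange its token for Azure access. The identity name, resource group, OIDC issuer URL, namespace, and service account name are placeholders rather than values from this announcement:
+
+```azurecli
+# Sketch only: replace the placeholder names and the OIDC issuer URL with your own values.
+az identity federated-credential create \
+    --name myFederatedCredential \
+    --identity-name myUserAssignedIdentity \
+    --resource-group myResourceGroup \
+    --issuer "<OIDC-ISSUER-URL>" \
+    --subject "system:serviceaccount:my-namespace:my-service-account" \
+    --audiences "api://AzureADTokenExchange"
+```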
+
+For more information, see:
+- [Workload identity federation](../workload-identities/workload-identity-federation.md).
+- [Configure a user-assigned managed identity to trust an external identity provider (preview)](../workload-identities/workload-identity-federation-create-trust-user-assigned-managed-identity.md)
+- [Use Azure AD workload identity (preview) with Azure Kubernetes Service (AKS)](../../aks/workload-identity-overview.md)
+++
+### Public Preview - New My Groups Experience
+
+**Type:** Changed feature
+**Service category:** Group Management
+**Product capability:** End User Experiences
+
+A new and improved My Groups experience is now available at https://www.myaccount.microsoft.com/groups. My Groups enables end users to easily manage groups, such as finding groups to join, managing groups they own, and managing existing group memberships. Based on customer feedback, the new My Groups experience supports sorting and filtering on lists of groups and group members, a full list of group members in large groups, and an actionable overview page for membership requests.
+This experience replaces the existing My Groups experience at https://www.mygroups.microsoft.com in May.
++
+For more information, see: [Update your Groups info in the My Apps portal](https://support.microsoft.com/account-billing/update-your-groups-info-in-the-my-apps-portal-bc0ca998-6d3a-42ac-acb8-e900fb1174a4).
+++
+### Public preview - Customize tokens with Custom Claims Providers
+
+**Type:** New feature
+**Service category:** Authentications (Logins)
+**Product capability:** Extensibility
+
+A custom claims provider lets you call an API and map custom claims into the token during the authentication flow. The API call is made after the user has completed all their authentication challenges, and a token is about to be issued to the app. For more information, see: [Custom authentication extensions (preview)](../develop/custom-claims-provider-overview.md).
+++
+### General Availability - Converged Authentication Methods
+
+**Type:** New feature
+**Service category:** MFA
+**Product capability:** User Authentication
+
+The Converged Authentication Methods Policy enables you to manage all authentication methods used for MFA and SSPR in one policy, migrate off the legacy MFA and SSPR policies, and target authentication methods to groups of users instead of enabling them for all users in your tenant. For more information, see: [Manage authentication methods](../authentication/concept-authentication-methods-manage.md).
+++
+### General Availability - Provisioning Insights Workbook
+
+**Type:** New feature
+**Service category:** Provisioning
+**Product capability:** Monitoring & Reporting
+
+This new workbook makes it easier to investigate and gain insights into your provisioning workflows in a given tenant. This includes HR-driven provisioning, cloud sync, app provisioning, and cross-tenant sync.
+
+Some key questions this workbook can help answer are:
+
+- How many identities have been synced in a given time range?
+- How many create, delete, update, or other operations were performed?
+- How many operations were successful, skipped, or failed?
+- What specific identities failed? And what step did they fail on?
+- For any given user, which tenants/applications were they provisioned to or deprovisioned from?
+
+For more information, see: [Provisioning insights workbook](../app-provisioning/provisioning-workbook.md).
+++
+### General Availability - Number Matching for Microsoft Authenticator notifications
+
+**Type:** Plan for Change
+**Service category:** Microsoft Authenticator App
+**Product capability:** User Authentication
+
+Microsoft Authenticator app's number matching feature has been Generally Available since Nov 2022! If you haven't already used the rollout controls (via Azure portal Admin UX and MSGraph APIs) to smoothly deploy number matching for users of Microsoft Authenticator push notifications, we highly encourage you to do so. We previously announced that we'll remove the admin controls and enforce the number match experience tenant-wide for all users of Microsoft Authenticator push notifications starting February 27, 2023. After listening to customers, we'll extend the availability of the rollout controls for a few more weeks. Organizations can continue to use the existing rollout controls until May 8, 2023, to deploy number matching in their organizations. Microsoft services will start enforcing the number matching experience for all users of Microsoft Authenticator push notifications after May 8, 2023. We'll also remove the rollout controls for number matching after that date.
+
+If customers don't enable number match for all Microsoft Authenticator push notifications prior to May 8, 2023, Authenticator users may experience inconsistent sign-ins while the services are rolling out this change. To ensure consistent behavior for all users, we highly recommend you enable number match for Microsoft Authenticator push notifications in advance.
+
+For more information, see: [How to use number matching in multifactor authentication (MFA) notifications - Authentication methods policy](../authentication/how-to-mfa-number-match.md)
+++
+### Public Preview - IPv6 coming to Azure AD
+
+**Type:** Plan for Change
+**Service category:** Identity Protection
+**Product capability:** Platform
+
+Earlier, we announced our plan to bring IPv6 support to Microsoft Azure Active Directory (Azure AD), enabling our customers to reach the Azure AD services over IPv4, IPv6 or dual stack endpoints. This is just a reminder that we have started introducing IPv6 support into Azure AD services in a phased approach in late March 2023.
+
+If you utilize Conditional Access or Identity Protection, and have IPv6 enabled on any of your devices, you likely must take action to avoid impacting your users. For most customers, IPv4 won't completely disappear from their digital landscape, so we aren't planning to require IPv6 or to deprioritize IPv4 in any Azure AD features or services. We'll continue to share additional guidance on IPv6 enablement in Azure AD at this link: [IPv6 support in Azure Active Directory](https://learn.microsoft.com/troubleshoot/azure/active-directory/azure-ad-ipv6-support)
+++
+### General Availability - Microsoft cloud settings for Azure AD B2B
+
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+Microsoft cloud settings let you collaborate with organizations from different Microsoft Azure clouds. With Microsoft cloud settings, you can establish mutual B2B collaboration between the following clouds:
+
+- Microsoft Azure commercial and Microsoft Azure Government
+- Microsoft Azure commercial and Microsoft Azure China 21Vianet
+
+For more information about Microsoft cloud settings for B2B collaboration, see: [Microsoft cloud settings](../external-identities/cross-tenant-access-overview.md#microsoft-cloud-settings).
+++
+### Modernizing Terms of Use Experiences
+
+**Type:** Plan for Change
+**Service category:** Access Reviews
+**Product capability:** AuthZ/Access Delegation
+
+Starting July 2023, we're modernizing the following Terms of Use end user experiences with an updated PDF viewer, and moving the experiences from https://account.activedirectory.windowsazure.com to https://myaccount.microsoft.com:
+- View previously accepted terms of use.
+- Accept or decline terms of use as part of the sign-in flow.
+
+No functionalities will be removed. The new PDF viewer adds functionality and the limited visual changes in the end-user experiences will be communicated in a future update. If your organization has allow-listed only certain domains, you must ensure your allowlist includes the domains 'myaccount.microsoft.com' and '*.myaccount.microsoft.com' for Terms of Use to continue working as expected.
+++ ## February 2023 ### General Availability - Expanding Privileged Identity Management Role Activation across the Azure portal
Privileged Identity Management (PIM) role activation has been expanded to the Bi
For more information, see: [Activate my Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-activate-your-roles.md). - ### General Availability - Follow Azure AD best practices with recommendations
For listing your application in the Azure AD app gallery, read the details here
**Service category:** Other **Product capability:** Developer Experience
-As part of our ongoing initiative to improve the developer experience, service reliability, and security of customer applications, we'll end support for the Microsoft Authentication Library (ADAL). The final deadline to migrate your applications to Microsoft Authentication Library (MSAL) has been extended to **June 30, 2023**.
+As part of our ongoing initiative to improve the developer experience, service reliability, and security of customer applications, we'll end support for the Azure Active Directory Authentication Library (ADAL). The final deadline to migrate your applications to the Microsoft Authentication Library (MSAL) has been extended to **June 30, 2023**.
### Why are we doing this?
-As we consolidate and evolve the Microsoft Identity platform, we're also investing in making significant improvements to the developer experience and service features that make it possible to build secure, robust and resilient applications. To make these features available to our customers, we needed to update the architecture of our software development kits. As a result of this change, we've decided that the path forward requires us to sunset Azure Active Directory Authentication Library. This allows us to focus on developer experience investments with Microsoft Authentication Library.
+As we consolidate and evolve the Microsoft Identity platform, we're also investing in making significant improvements to the developer experience and service features that make it possible to build secure, robust and resilient applications. To make these features available to our customers, we needed to update the architecture of our software development kits. As a result of this change, we've decided that the path forward requires us to sunset Azure Active Directory Authentication Library. This allows us to focus on developer experience investments with the Microsoft Authentication Library.
### What happens?
active-directory Manage Self Service Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-self-service-access.md
To enable self-service application access to an application, follow the steps be
1. In the left navigation menu, select **Self-service**. > [!NOTE]
- > The **Self-service** menu item isn't available if your app registration's setting for public client flows is enabled. To access this menu item, select **Authentication** in the left navigation, then set the **Allow public client flows** to **No**.
+ > The **Self-service** menu item isn't available if the corresponding app registration's setting for public client flows is enabled. To access this setting, the app registration needs to exist in your tenant. Locate the app registration, select **Authentication** in the left navigation, then locate **Allow public client flows**.
1. To enable Self-service application access for this application, set **Allow users to request access to this application?** to **Yes.** 1. Next to **To which group should assigned users be added?**, select **Select group**. Choose a group, and then select **Select**. When a user's request is approved, they'll be added to this group. When viewing this group's membership, you'll be able to see who has been granted access to the application through self-service access.
active-directory Delegate App Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/delegate-app-roles.md
Previously updated : 11/04/2020 Last updated : 03/30/2023
Tips when creating and using custom roles for delegating application management:
For more information on the basics of custom roles, see the [custom roles overview](custom-overview.md), as well as how to [create a custom role](custom-create.md) and how to [assign a role](custom-assign-powershell.md).
+## Troubleshoot
+
+### Symptom - Access denied when you try to register an application
+
+When you try to register an application in Azure AD, you get a message similar to the following:
+
+```
+Access denied
+You do not have access
+You don't have permission to register applications in the <directoryName> directory. To request access, contact your administrator.
+```
++
+**Cause**
+
+You can't register the application in the directory because your directory administrator has [restricted who can create applications](#restrict-who-can-create-applications).
+
+**Solution**
+
+Contact your administrator to do one of the following:
+
+- Grant you permissions to create and consent to applications by [assigning you the Application Developer role](#grant-individual-permissions-to-create-and-consent-to-applications-when-the-default-ability-is-disabled).
+- Create the application registration for you and [assign you as the application owner](#assign-application-owners).
+ ## Next steps - [Application registration subtypes and permissions](custom-available-permissions.md)
active-directory Github Enterprise Cloud Enterprise Account Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-enterprise-cloud-enterprise-account-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with GitHub Enterprise Cloud - Enterprise Account'
+ Title: 'Tutorial: Azure Active Directory SSO integration with GitHub Enterprise Cloud - Enterprise Account'
description: Learn how to configure single sign-on between Azure Active Directory and GitHub Enterprise Cloud - Enterprise Account.
Previously updated : 11/21/2022 Last updated : 03/29/2023
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with GitHub Enterprise Cloud - Enterprise Account
+# Tutorial: Azure Active Directory SSO integration with GitHub Enterprise Cloud - Enterprise Account
-In this tutorial, you'll learn how to integrate GitHub Enterprise Cloud - Enterprise Account with Azure Active Directory (Azure AD). When you integrate GitHub Enterprise Cloud - Enterprise Account with Azure AD, you can:
+In this tutorial, you learn how to set up an Azure Active Directory (Azure AD) SAML integration with a GitHub Enterprise Cloud - Enterprise Account. When you integrate GitHub Enterprise Cloud - Enterprise Account with Azure AD, you can:
* Control in Azure AD who has access to a GitHub Enterprise Account and any organizations within the Enterprise Account. - ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
## Scenario description
-In this tutorial, you configure and test Azure AD SSO in a test environment.
+In this tutorial, you will configure a SAML integration for a GitHub Enterprise Account, and test enterprise account owner and enterprise/organization member authentication and access.
+
+> [!NOTE]
+> The GitHub `Enterprise Cloud - Enterprise Account` application does not support enabling [automatic SCIM provisioning](../fundamentals/sync-scim.md). If you need to set up provisioning for your GitHub Enterprise Cloud environment, SAML must be configured at the organization level and the `GitHub Enterprise Cloud - Organization` Azure AD application must be used instead. If you are setting up a SAML and SCIM provisioning integration for an enterprise that is enabled for [Enterprise Managed Users (EMUs)](https://docs.github.com/enterprise-cloud@latest/admin/identity-and-access-management/using-enterprise-managed-users-for-iam/about-enterprise-managed-users), then you must use the `GitHub Enterprise Managed User` Azure AD application for SAML/Provisioning integrations or the `GitHub Enterprise Managed User (OIDC)` Azure AD application for OIDC/Provisioning integrations.
* GitHub Enterprise Cloud - Enterprise Account supports **SP** and **IDP** initiated SSO.
-* GitHub Enterprise Cloud - Enterprise Account supports **Just In Time** user provisioning.
## Adding GitHub Enterprise Cloud - Enterprise Account from the gallery
To configure the integration of GitHub Enterprise Cloud - Enterprise Account int
1. In the **Add from the gallery** section, type **GitHub Enterprise Cloud - Enterprise Account** in the search box. 1. Select **GitHub Enterprise Cloud - Enterprise Account** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+ Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
## Configure and test Azure AD SSO for GitHub Enterprise Cloud - Enterprise Account
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern: `https://github.com/enterprises/<ENTERPRISE-SLUG>`
Follow these steps to enable Azure AD SSO in the Azure portal.
b. In the **Reply URL** text box, type a URL using the following pattern: `https://github.com/enterprises/<ENTERPRISE-SLUG>/saml/consume`
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+1. Perform the following step if you wish to configure the application in **SP** initiated mode:
In the **Sign on URL** text box, type a URL using the following pattern: `https://github.com/enterprises/<ENTERPRISE-SLUG>/sso`
active-directory Revspace Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/revspace-tutorial.md
+
+ Title: Azure Active Directory SSO integration with RevSpace
+description: Learn how to configure single sign-on between Azure Active Directory and RevSpace.
++++++++ Last updated : 03/28/2023+++
+# Tutorial: Azure Active Directory SSO integration with RevSpace
+
+In this tutorial, you learn how to integrate RevSpace with Azure Active Directory (Azure AD). When you integrate RevSpace with Azure AD, you can:
+
+* Control in Azure AD who has access to RevSpace.
+* Enable your users to be automatically signed-in to RevSpace with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* RevSpace single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* RevSpace supports **SP and IDP** initiated SSO.
+* RevSpace supports **Just In Time** user provisioning.
+
+## Adding RevSpace from the gallery
+
+To configure the integration of RevSpace into Azure AD, you need to add RevSpace from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **RevSpace** in the search box.
+1. Select **RevSpace** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+ Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Azure AD SSO for RevSpace
+
+Configure and test Azure AD SSO with RevSpace using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in RevSpace.
+
+To configure and test Azure AD SSO with RevSpace, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure RevSpace SSO](#configure-revspace-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create RevSpace test user](#create-revspace-test-user)** - to have a counterpart of B.Simon in RevSpace that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **RevSpace** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_SUBDOMAIN>.revspace.io/login/callback`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_SUBDOMAIN>.revspace.io/login/callback`
+
+1. Perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_SUBDOMAIN>.revspace.io/login/callback`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [RevSpace Client support team](mailto:support@revspace.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. RevSpace application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the RevSpace application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also prepopulated, but you can review them as per your requirements.
+
+ | Name | Source Attribute |
+ | ---- | ---------------- |
+ | Firstname | user.givenname |
+ | Lastname | user.surname |
+ | jobtitle | user.jobtitle |
+ | department | user.department |
+ | employeeid | user.employeeid |
+ | postalcode | user.postalcode |
+ | country | user.country |
+ | role | user.assignedroles |
+
+ > [!NOTE]
+ > RevSpace expects roles for users assigned to the application. Please set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui).
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up RevSpace** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
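+If you prefer the command line, the following Azure CLI sketch creates an equivalent test user; the display name, user principal name, and password are placeholders, and the UPN domain must be a verified domain in your tenant:
+
+```azurecli
+# Sketch only: replace the UPN domain and password with values valid for your tenant.
+az ad user create \
+    --display-name "B.Simon" \
+    --user-principal-name "B.Simon@contoso.com" \
+    --password "<STRONG-PASSWORD>"
+```
+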
+### Assign the Azure AD test user
+
+In this section, you enable B.Simon to use Azure single sign-on by granting access to RevSpace.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **RevSpace**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select the appropriate role from the **Select a role** dropdown.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure RevSpace SSO
+
+1. In a different web browser window, sign into RevSpace as an administrator.
+
+1. Click on user Profile icon, then select **Company settings**.
+
+ ![Screenshot of company settings in RevSpace.](./media/teamzskill-tutorial/settings.png)
+
+1. Perform the following steps in **Settings** page.
+
+ ![Screenshot of settings in RevSpace.](./media/teamzskill-tutorial/metadata.png)
+
+ a. Navigate to **Company > Single Sign-On**, then select the **Metadata Upload** tab.
+
+ b. Paste the **Federation Metadata XML** value, which you copied from the Azure portal, into the **XML Metadata** field.
+
+ c. Then click **Save**.
+
+### Create RevSpace test user
+
+In this section, a user called B.Simon is created in RevSpace. RevSpace supports just-in-time provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in RevSpace, a new one is created when you attempt to access RevSpace.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the RevSpace Sign-on URL where you can initiate the login flow.
+
+* Go to RevSpace Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the RevSpace for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the RevSpace tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the RevSpace for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure RevSpace you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Signiant Media Shuttle Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/signiant-media-shuttle-tutorial.md
Previously updated : 03/13/2023 Last updated : 03/29/2023 # Azure Active Directory SSO integration with Signiant Media Shuttle
-In this article, you learn how to integrate Signiant Media Shuttle with Azure Active Directory (Azure AD). Media Shuttle is a solution for securely moving large files and data sets to, and from, cloud-based or on-premises storage. Transfers are accelerated and can be up to 100 s of times faster than FTP. When you integrate Signiant Media Shuttle with Azure AD, you can:
+In this article, you learn how to integrate Signiant Media Shuttle with Azure Active Directory (Azure AD). Media Shuttle is a solution for securely moving large files and data sets to, and from, cloud-based or on-premises storage. Transfers are accelerated and can be up to hundreds of times faster than FTP.
+
+When you integrate Signiant Media Shuttle with Azure AD, you can:
* Control in Azure AD who has access to Signiant Media Shuttle. * Enable your users to be automatically signed-in to Signiant Media Shuttle with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-You need to configure and test Azure AD single sign-on for Signiant Media Shuttle in a test environment. Signiant Media Shuttle supports only **SP** initiated single sign-on and **Just In Time** user provisioning.
+You must configure and test Azure AD single sign-on for Signiant Media Shuttle in a test environment. Signiant Media Shuttle supports **SP** initiated single sign-on and **Just In Time** user provisioning.
## Prerequisites
To integrate Azure Active Directory with Signiant Media Shuttle, you need:
* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Signiant Media Shuttle single sign-on (SSO) enabled subscription.
+* A Signiant Media Shuttle subscription with a SAML Web SSO license, and access to the IT and Operations Administration Consoles.
## Add application and assign a test user
Before you begin the process of configuring single sign-on, you need to add the
### Add Signiant Media Shuttle from the Azure AD gallery
-Add Signiant Media Shuttle from the Azure AD application gallery to configure single sign-on with Signiant Media Shuttle. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+Add Signiant Media Shuttle from the Azure AD application gallery to configure single sign-on for Signiant Media Shuttle. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
### Create and assign Azure AD test user Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+You can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
## Configure Azure AD SSO
Complete the following steps to enable Azure AD single sign-on in the Azure port
a. In the **Identifier** textbox, type a value or URL using one of the following patterns:
- | **Identifier** |
- ||
- | `https://<PORTALNAME>.mediashuttle.com` |
- | `mediashuttle` |
+ | **Configuration Type** | **Identifier** |
+ | -- | -- |
+ | Account Level | `mediashuttle` |
+ | Portal Level | `https://<PORTALNAME>.mediashuttle.com` |
b. In the **Reply URL** textbox, type a URL using one of the following patterns:
- | **Reply URL**|
- ||
- | `https://portals.mediashuttle.com/auth` |
- | `https://<PORTALNAME>.mediashuttle.com/auth` |
+ | **Configuration Type** | **Reply URL** |
+ | -- | -- |
+ | Account Level | `https://portals.mediashuttle.com/auth` |
+ | Portal Level | `https://<PORTALNAME>.mediashuttle.com/auth`|
- c. In the **Sign on URL** textbox, type a URL using one of the following patterns:
+ c. In the **Sign on URL** textbox, type a URL using one of the following patterns:
- | **Sign on URL**|
- ||
- | `https://portals.mediashuttle.com/auth` |
- | `https://<PORTALNAME>.mediashuttle.com/auth` |
+ | **Configuration Type** | **Sign on URL** |
+ | - | -- |
+ | Account Level | `https://portals.mediashuttle.com/auth` |
+ | Portal Level | `https://<PORTALNAME>.mediashuttle.com/auth` |
> [!Note] > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Signiant Media Shuttle support team](mailto:support@signiant.com) to get these values. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure portal.
Complete the following steps to enable Azure AD single sign-on in the Azure port
## Configure Signiant Media Shuttle SSO
-To configure single sign-on on **Signiant Media Shuttle** side, you need to send the **App Federation Metadata Url** to [Signiant Media Shuttle support team](mailto:support@signiant.com). They set this setting to have the SAML SSO connection set properly on both sides.
+Once you have the **App Federation Metadata Url**, sign in to the Media Shuttle IT Administration Console.
+
+To add Azure AD Metadata in Media Shuttle:
+
+1. Log into your IT Administration Console.
+
+2. On the Security page, in the Identity Provider Metadata field, paste the **App Federation Metadata Url** which you've copied from the Azure portal.
+
+3. Click **Save**.
+
+Once you have set up Azure AD for Media Shuttle, assigned users and groups can sign in to Media Shuttle portals through single sign-on using Azure AD authentication.
### Create Signiant Media Shuttle test user In this section, a user called Britta Simon is created in Signiant Media Shuttle. Signiant Media Shuttle supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Signiant Media Shuttle, a new one is created after authentication.
+If **Auto-add SAML authenticated members to this portal** is not enabled as part of the SAML configuration, you must add users through the **Portal Administration** console at `https://<PORTALNAME>.mediashuttle.com/admin`.
+ ## Test SSO In this section, you test your Azure AD single sign-on configuration with following options.
active-directory Presentation Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/presentation-request-api.md
The callback endpoint is called when a user scans the QR code, uses the deep lin
| `requestStatus` |string |The status returned when the request was retrieved by the authenticator app. Possible values: <ul><li>`request_retrieved`: The user scanned the QR code or selected the link that starts the presentation flow.</li><li>`presentation_verified`: The verifiable credential validation completed successfully.</li></ul> | | `state` |string| Returns the state value that you passed in the original payload. | | `subject`|string | The verifiable credential user DID.|
-| `issuers`| array |Returns an array of verifiable credentials requested. For each verifiable credential, it provides: </li><li>The verifiable credential type(s).</li><li>The issuer's DID</li><li>The claims retrieved.</li><li>The verifiable credential issuer's domain. </li><li>The verifiable credential issuer's domain validation status. </li></ul> |
+| `verifiedCredentialsData`| array |Returns an array of verifiable credentials requested. For each verifiable credential, it provides: </li><li>The verifiable credential type(s).</li><li>The issuer's DID</li><li>The claims retrieved.</li><li>The verifiable credential issuer's domain. </li><li>The verifiable credential issuer's domain validation status. </li></ul> |
| `receipt`| string | Optional. The receipt contains the original payload sent from the wallet to the Verifiable Credentials service. The receipt should be used for troubleshooting/debugging only. The format in the receipt isn't fixed and can change based on the wallet and version used.| The following example demonstrates a callback payload when the authenticator app starts the presentation request:
advisor Advisor Azure Resource Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-azure-resource-graph.md
Advisor resources are now onboarded to [Azure Resource Graph](https://azure.microsoft.com/features/resource-graph/). This lays the foundation for many at-scale customer scenarios for Advisor recommendations. A few scenarios that weren't possible to do at scale before and can now be achieved using Resource Graph are: * The capability to perform complex queries for all your subscriptions in the Azure portal
-* Recommendations summarized by category types ( like reliability, performance) and impact types (high, medium, low)
+* Recommendations summarized by category types (like reliability, performance) and impact types (high, medium, low)
* All recommendations for a particular recommendation type * Impacted resource count by recommendation category
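
As a sketch of the summarization scenario above, the following Azure CLI command (using the *resource-graph* extension) counts Advisor recommendations by category and impact across your subscriptions; treat the query as an illustration rather than a query taken from this article:

```azurecli
# Sketch: requires the resource-graph extension (az extension add --name resource-graph).
az graph query -q "advisorresources | where type == 'microsoft.advisor/recommendations' | summarize Count=count() by tostring(properties.category), tostring(properties.impact)"
```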
aks Azure Csi Blob Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-provision.md
This section provides guidance for cluster administrators who want to create one
| | **Following parameters are only for blobfuse** | | | | |volumeAttributes.secretName | Secret name that stores storage account name and key (only applies for SMB).| | No || |volumeAttributes.secretNamespace | Specify namespace of secret to store account key. | `default` | No | Pvc namespace|
-|nodeStageSecretRef.name | Specify secret name that stores one of the following:<br> `azurestorageaccountkey`<br>`azurestorageaccountsastoken`<br>`msisecret`<br>`azurestoragespnclientsecret`. | |Existing Kubernetes secret name | No |
+|nodeStageSecretRef.name | Specify secret name that stores one of the following:<br> `azurestorageaccountkey`<br>`azurestorageaccountsastoken`<br>`msisecret`<br>`azurestoragespnclientsecret`. | | No |Existing Kubernetes secret name |
|nodeStageSecretRef.namespace | Specify the namespace of secret. | Kubernetes namespace | Yes || | | **Following parameters are only for NFS protocol** | | | | |volumeAttributes.mountPermissions | Specify mounted folder permissions. | `0777` | No ||
aks Azure Disk Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-customer-managed-keys.md
az keyvault set-policy -n myKeyVaultName -g myResourceGroup --object-id $desIden
Create a **new resource group** and AKS cluster, then use your key to encrypt the OS disk. > [!IMPORTANT]
-> Ensure you create a new resoruce group for your AKS cluster
+> Ensure you create a new resource group for your AKS cluster
```azurecli-interactive # Retrieve the DiskEncryptionSet value and set a variable
someuser@Azure:~$ az account list
] ```
-Create a file called **byok-azure-disk.yaml** that contains the following information. Replace myAzureSubscriptionId, myResourceGroup, and myDiskEncrptionSetName with your values, and apply the yaml. Make sure to use the resource group where your DiskEncryptionSet is deployed. If you use the Azure Cloud Shell, this file can be created using vi or nano as if working on a virtual or physical system:
+Create a file called **byok-azure-disk.yaml** that contains the following information. Replace *myAzureSubscriptionId*, *myResourceGroup*, and *myDiskEncrptionSetName* with your values, and apply the yaml. Make sure to use the resource group where your DiskEncryptionSet is deployed.
-```
+```yaml
kind: StorageClass apiVersion: storage.k8s.io/v1 metadata:
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
az aks create \
--docker-bridge-address 172.17.0.1/16 \ --vnet-subnet-id $SUBNET_ID ```
-* The *--service-cidr* is optional. This address is used to assign internal services in the AKS cluster an IP address. This IP address range should be an address space that isn't in use elsewhere in your network environment, including any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection.
+* The *--service-cidr* is optional. This address is used to assign internal services in the AKS cluster an IP address. This IP address range should be an address space that isn't in use elsewhere in your network environment, including any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection. The default value is 10.0.0.0/16.
-* The *--dns-service-ip* is optional. The address should be the *.10* address of your service IP address range.
+* The *--dns-service-ip* is optional. The address should be the *.10* address of your service IP address range. The default value is 10.0.0.10.
-* The *--pod-cidr* is optional. This address should be a large address space that isn't in use elsewhere in your network environment. This range includes any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection.
+* The *--pod-cidr* is optional. This address should be a large address space that isn't in use elsewhere in your network environment. This range includes any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection. The default value is 10.244.0.0/16.
* This address range must be large enough to accommodate the number of nodes that you expect to scale up to. You can't change this address range once the cluster is deployed if you need more addresses for additional nodes. * The pod IP address range is used to assign a */24* address space to each node in the cluster. In the following example, the *--pod-cidr* of *10.244.0.0/16* assigns the first node *10.244.0.0/24*, the second node *10.244.1.0/24*, and the third node *10.244.2.0/24*. * As the cluster scales or upgrades, the Azure platform continues to assign a pod IP address range to each new node.
-* The *--docker-bridge-address* is optional. The address lets the AKS nodes communicate with the underlying management platform. This IP address must not be within the virtual network IP address range of your cluster, and shouldn't overlap with other address ranges in use on your network.
+* The *--docker-bridge-address* is optional. The address lets the AKS nodes communicate with the underlying management platform. This IP address must not be within the virtual network IP address range of your cluster, and shouldn't overlap with other address ranges in use on your network. The default value is 172.17.0.1/16.
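
To confirm the per-node */24* pod address assignment described above for *--pod-cidr*, you can list each node's pod CIDR once the cluster is created. This is a verification sketch only and assumes `kubectl` is already connected to the cluster:

```bash
# Sketch: show the /24 pod CIDR that kubenet assigned to each node.
kubectl get nodes -o custom-columns=NAME:.metadata.name,POD_CIDR:.spec.podCIDR
```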
> [!Note] > If you wish to enable an AKS cluster to include a [Calico network policy][calico-network-policies] you can use the following command.
az aks create \
--node-count 3 \ --network-plugin kubenet \ --vnet-subnet-id $SUBNET_ID \
- --enable-managed-identity \
--assign-identity <identity-resource-id> ```
aks Custom Node Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-node-configuration.md
Customizing your node configuration allows you to configure or tune your operating system (OS) settings or the kubelet parameters to match the needs of the workloads. When you create an AKS cluster or add a node pool to your cluster, you can customize a subset of commonly used OS and kubelet settings. To configure settings beyond this subset, [use a daemon set to customize your needed configurations without losing AKS support for your nodes](support-policies.md#shared-responsibility).
-## Use custom node configuration
+## Create an AKS cluster with a customized node configuration
-### Kubelet custom configuration
-The supported Kubelet parameters and accepted values are listed below.
+### Prerequisites for Windows kubelet custom configuration (Preview)
-| Parameter | Allowed values/interval | Default | Description |
-| | -- | - | -- |
-| `cpuManagerPolicy` | none, static | none | The static policy allows containers in [Guaranteed pods](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/) with integer CPU requests access to exclusive CPUs on the node. |
-| `cpuCfsQuota` | true, false | true | Enable/Disable CPU CFS quota enforcement for containers that specify CPU limits. |
-| `cpuCfsQuotaPeriod` | Interval in milliseconds (ms) | `100ms` | Sets CPU CFS quota period value. |
-| `imageGcHighThreshold` | 0-100 | 85 | The percent of disk usage after which image garbage collection is always run. Minimum disk usage that **will** trigger garbage collection. To disable image garbage collection, set to 100. |
-| `imageGcLowThreshold` | 0-100, no higher than `imageGcHighThreshold` | 80 | The percent of disk usage before which image garbage collection is never run. Minimum disk usage that **can** trigger garbage collection. |
-| `topologyManagerPolicy` | none, best-effort, restricted, single-numa-node | none | Optimize NUMA node alignment, see more [here](https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/). |
-| `allowedUnsafeSysctls` | `kernel.shm*`, `kernel.msg*`, `kernel.sem`, `fs.mqueue.*`, `net.*` | None | Allowed list of unsafe sysctls or unsafe sysctl patterns. |
-| `containerLogMaxSizeMB` | Size in megabytes (MB) | 10 MB | The maximum size (for example, 10 MB) of a container log file before it's rotated. |
-| `containerLogMaxFiles` | ≥ 2 | 5 | The maximum number of container log files that can be present for a container. |
-| `podMaxPids` | -1 to kernel PID limit | -1 (∞)| The maximum amount of process IDs that can be running in a Pod |
-
-### Linux OS custom configuration
-
-The supported OS settings and accepted values are listed below.
-
-#### File handle limits
-
-When you're serving a lot of traffic, it's common that the traffic you're serving is coming from a large number of local files. You can tweak the below kernel settings and built-in limits to allow you to handle more, at the cost of some system memory.
-
-| Setting | Allowed values/interval | Default | Description |
-| - | -- | - | -- |
-| `fs.file-max` | 8192 - 12000500 | 709620 | Maximum number of file-handles that the Linux kernel will allocate, by increasing this value you can increase the maximum number of open files permitted. |
-| `fs.inotify.max_user_watches` | 781250 - 2097152 | 1048576 | Maximum number of file watches allowed by the system. Each *watch* is roughly 90 bytes on a 32-bit kernel, and roughly 160 bytes on a 64-bit kernel. |
-| `fs.aio-max-nr` | 65536 - 6553500 | 65536 | The aio-nr shows the current system-wide number of asynchronous io requests. aio-max-nr allows you to change the maximum value aio-nr can grow to. |
-| `fs.nr_open` | 8192 - 20000500 | 1048576 | The maximum number of file-handles a process can allocate. |
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+First, install the aks-preview extension by running the following command:
-#### Socket and network tuning
-
-For agent nodes, which are expected to handle very large numbers of concurrent sessions, you can use the subset of TCP and network options below that you can tweak per node pool.
-
-| Setting | Allowed values/interval | Default | Description |
-| - | -- | - | -- |
-| `net.core.somaxconn` | 4096 - 3240000 | 16384 | Maximum number of connection requests that can be queued for any given listening socket. An upper limit for the value of the backlog parameter passed to the [listen(2)](http://man7.org/linux/man-pages/man2/listen.2.html) function. If the backlog argument is greater than the `somaxconn`, then it's silently truncated to this limit.
-| `net.core.netdev_max_backlog` | 1000 - 3240000 | 1000 | Maximum number of packets, queued on the INPUT side, when the interface receives packets faster than kernel can process them. |
-| `net.core.rmem_max` | 212992 - 134217728 | 212992 | The maximum receive socket buffer size in bytes. |
-| `net.core.wmem_max` | 212992 - 134217728 | 212992 | The maximum send socket buffer size in bytes. |
-| `net.core.optmem_max` | 20480 - 4194304 | 20480 | Maximum ancillary buffer size (option memory buffer) allowed per socket. Socket option memory is used in a few cases to store extra structures relating to usage of the socket. |
-| `net.ipv4.tcp_max_syn_backlog` | 128 - 3240000 | 16384 | The maximum number of queued connection requests that have still not received an acknowledgment from the connecting client. If this number is exceeded, the kernel will begin dropping requests. |
-| `net.ipv4.tcp_max_tw_buckets` | 8000 - 1440000 | 32768 | Maximal number of `timewait` sockets held by system simultaneously. If this number is exceeded, time-wait socket is immediately destroyed and warning is printed. |
-| `net.ipv4.tcp_fin_timeout` | 5 - 120 | 60 | The length of time an orphaned (no longer referenced by any application) connection will remain in the FIN_WAIT_2 state before it's aborted at the local end. |
-| `net.ipv4.tcp_keepalive_time` | 30 - 432000 | 7200 | How often TCP sends out `keepalive` messages when `keepalive` is enabled. |
-| `net.ipv4.tcp_keepalive_probes` | 1 - 15 | 9 | How many `keepalive` probes TCP sends out, until it decides that the connection is broken. |
-| `net.ipv4.tcp_keepalive_intvl` | 1 - 75 | 75 | How frequently the probes are sent out. Multiplied by `tcp_keepalive_probes` it makes up the time to kill a connection that isn't responding, after probes started. |
-| `net.ipv4.tcp_tw_reuse` | 0 or 1 | 0 | Allow to reuse `TIME-WAIT` sockets for new connections when it's safe from protocol viewpoint. |
-| `net.ipv4.ip_local_port_range` | First: 1024 - 60999 and Last: 32768 - 65000] | First: 32768 and Last: 60999 | The local port range that is used by TCP and UDP traffic to choose the local port. Comprised of two numbers: The first number is the first local port allowed for TCP and UDP traffic on the agent node, the second is the last local port number. |
-| `net.ipv4.neigh.default.gc_thresh1`| 128 - 80000 | 4096 | Minimum number of entries that may be in the ARP cache. Garbage collection won't be triggered if the number of entries is below this setting. |
-| `net.ipv4.neigh.default.gc_thresh2`| 512 - 90000 | 8192 | Soft maximum number of entries that may be in the ARP cache. This setting is arguably the most important, as ARP garbage collection will be triggered about 5 seconds after reaching this soft maximum. |
-| `net.ipv4.neigh.default.gc_thresh3`| 1024 - 100000 | 16384 | Hard maximum number of entries in the ARP cache. |
-| `net.netfilter.nf_conntrack_max` | 131072 - 1048576 | 131072 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_max` is the maximum number of nodes in the hash table, that is, the maximum number of connections supported by the `nf_conntrack` module or the size of connection tracking table. |
-| `net.netfilter.nf_conntrack_buckets` | 65536 - 147456 | 65536 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_buckets` is the size of hash table. |
+```azurecli
+az extension add --name aks-preview
+```
-#### Worker limits
+Run the following command to update to the latest released version of the extension:
-Like file descriptor limits, the number of workers or threads that a process can create are limited by both a kernel setting and user limits. The user limit on AKS is unlimited.
+```azurecli
+az extension update --name aks-preview
+```
-| Setting | Allowed values/interval | Default | Description |
-| - | -- | - | -- |
-| `kernel.threads-max` | 20 - 513785 | 55601 | Processes can spin up worker threads. The maximum number of all threads that can be created is set with the kernel setting `kernel.threads-max`. |
+Then register the `WindowsCustomKubeletConfigPreview` feature flag by using the [`az feature register`][az-feature-register] command, as shown in the following example:
-#### Virtual memory
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "WindowsCustomKubeletConfigPreview"
+```
-The settings below can be used to tune the operation of the virtual memory (VM) subsystem of the Linux kernel and the `writeout` of dirty data to disk.
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [`az feature show`][az-feature-show] command:
-| Setting | Allowed values/interval | Default | Description |
-| - | -- | - | -- |
-| `vm.max_map_count` | 65530 - 262144 | 65530 | This file contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling `malloc`, directly by `mmap`, `mprotect`, and `madvise`, and also when loading shared libraries. |
-| `vm.vfs_cache_pressure` | 1 - 500 | 100 | This percentage value controls the tendency of the kernel to reclaim the memory, which is used for caching of directory and inode objects. |
-| `vm.swappiness` | 0 - 100 | 60 | This control is used to define how aggressive the kernel will swap memory pages. Higher values will increase aggressiveness, lower values decrease the amount of swap. A value of 0 instructs the kernel not to initiate swap until the amount of free and file-backed pages is less than the high water mark in a zone. |
-| `swapFileSizeMB` | 1 MB - Size of the [temporary disk](../virtual-machines/managed-disks-overview.md#temporary-disk) (/dev/sdb) | None | SwapFileSizeMB specifies size in MB of a swap file will be created on the agent nodes from this node pool. |
-| `transparentHugePageEnabled` | `always`, `madvise`, `never` | `always` | [Transparent Hugepages](https://www.kernel.org/doc/html/latest/admin-guide/mm/transhuge.html#admin-guide-transhuge) is a Linux kernel feature intended to improve performance by making more efficient use of your processor's memory-mapping hardware. When enabled the kernel attempts to allocate `hugepages` whenever possible and any Linux process will receive 2-MB pages if the `mmap` region is 2 MB naturally aligned. In certain cases when `hugepages` are enabled system wide, applications may end up allocating more memory resources. An application may `mmap` a large region but only touch 1 byte of it, in that case a 2-MB page might be allocated instead of a 4k page for no good reason. This scenario is why it's possible to disable `hugepages` system-wide or to only have them inside `MADV_HUGEPAGE madvise` regions. |
-| `transparentHugePageDefrag` | `always`, `defer`, `defer+madvise`, `madvise`, `never` | `madvise` | This value controls whether the kernel should make aggressive use of memory compaction to make more `hugepages` available. |
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "WindowsCustomKubeletConfigPreview"
+```
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [`az provider register`][az-provider-register] command:
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
-> [!IMPORTANT]
-> For ease of search and readability the OS settings are displayed in this document by their name but should be added to the configuration json file or AKS API using [camelCase capitalization convention](/dotnet/standard/design-guidelines/capitalization-conventions).
+### Create config files for kubelet configuration, OS configuration, or both
-Create a `kubeletconfig.json` file with the following contents:
+Create a `linuxkubeletconfig.json` file with the following contents (for Linux node pools):
```json {
Create a `kubeletconfig.json` file with the following contents:
"failSwapOn": false } ```
-Create a `linuxosconfig.json` file with the following contents:
+> [!NOTE]
+> Windows kubelet custom configuration supports only the parameters `imageGcHighThreshold`, `imageGcLowThreshold`, `containerLogMaxSizeMB`, and `containerLogMaxFiles`. Remove any unsupported parameters from the JSON file contents above.
+
+Create a `windowskubeletconfig.json` file with the following contents (for Windows node pools):
+
+```json
+{
+ "imageGcHighThreshold": 90,
+ "imageGcLowThreshold": 70,
+ "containerLogMaxSizeMB": 20,
+ "containerLogMaxFiles": 6
+}
+```
+
+Create a `linuxosconfig.json` file with the following contents (for Linux node pools only):
```json {
Create a `linuxosconfig.json` file with the following contents:
} ```
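Similarly, a Linux OS configuration file might look like the following sketch. The setting names map to the OS settings tables later in this article, and sysctl names are written in camelCase inside the `sysctls` block (for example, `net.core.somaxconn` becomes `netCoreSomaxconn`). The values shown are illustrative only.

```json
{
    "transparentHugePageEnabled": "madvise",
    "transparentHugePageDefrag": "defer+madvise",
    "swapFileSizeMB": 1500,
    "sysctls": {
        "netCoreSomaxconn": 163849,
        "netIpv4TcpTwReuse": true,
        "netIpv4IpLocalPortRange": "32000 60000"
    }
}
```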
-Create a new cluster specifying the kubelet and OS configurations using the JSON files created in the previous step.
+### Create a new cluster using custom configuration files
+
+When creating a new cluster, you can use the customized configuration files created in the previous step to specify the kubelet configuration, OS configuration, or both. Since the first node pool created with `az aks create` is always a Linux node pool, use the `linuxkubeletconfig.json` and `linuxosconfig.json` files.
> [!NOTE]
-> When you create a cluster, you can specify the kubelet configuration, OS configuration, or both. If you specify a configuration when creating a cluster, only the nodes in the initial node pool will have that configuration applied. Any settings not configured in the JSON file will retain the default value. CustomKubeletConfig or CustomLinuxOsConfig isn't supported for OS type: Windows.
+> If you specify a configuration when creating a cluster, only the nodes in the initial node pool will have that configuration applied. Any settings not configured in the JSON file will retain the default value. CustomLinuxOsConfig isn't supported for OS type: Windows.
```azurecli
-az aks create --name myAKSCluster --resource-group myResourceGroup --kubelet-config ./kubeletconfig.json --linux-os-config ./linuxosconfig.json
+az aks create --name myAKSCluster --resource-group myResourceGroup --kubelet-config ./linuxkubeletconfig.json --linux-os-config ./linuxosconfig.json
```
+### Add a node pool using custom configuration files
-Add a new node pool specifying the Kubelet parameters using the JSON file you created.
+When adding a node pool to a cluster, you can use the customized configuration file created in the previous step to specify the kubelet configuration. CustomKubeletConfig is supported for Linux and Windows node pools.
> [!NOTE]
-> When you add a node pool to an existing cluster, you can specify the kubelet configuration, OS configuration, or both. If you specify a configuration when adding a node pool, only the nodes in the new node pool will have that configuration applied. Any settings not configured in the JSON file will retain the default value.
+> When you add a Linux node pool to an existing cluster, you can specify the kubelet configuration, OS configuration, or both. When you add a Windows node pool to an existing cluster, you can only specify the kubelet configuration. If you specify a configuration when adding a node pool, only the nodes in the new node pool will have that configuration applied. Any settings not configured in the JSON file will retain the default value.
+
+For Linux node pools:
+
+```azurecli
+az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --kubelet-config ./linuxkubeletconfig.json
+```
+For Windows node pools (Preview):
```azurecli
-az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --kubelet-config ./kubeletconfig.json
+az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --os-type Windows --kubelet-config ./windowskubeletconfig.json
```
-## Other configuration
+### Other configurations
-The settings below can be used to modify other Operating System settings.
+The following settings can be used to modify other operating system configurations.
-### Message of the Day
+#### Message of the Day
-Pass the ```--message-of-the-day``` flag with the location of the file to replace the Message of the Day on Linux nodes at cluster creation or node pool creation.
+Pass the ```--message-of-the-day``` flag with the location of the file to replace the Message of the Day on Linux nodes at cluster creation or node pool creation.
-#### Cluster creation
+##### Cluster creation
```azurecli az aks create --cluster-name myAKSCluster --resource-group myResourceGroup --message-of-the-day ./newMOTD.txt ```
-#### Nodepool creation
+##### Nodepool creation
```azurecli az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --message-of-the-day ./newMOTD.txt ```
-## Confirm settings have been applied
+### Confirm settings have been applied
After you have applied custom node configuration, you can confirm the settings have been applied to the nodes by [connecting to the host][node-access] and verifying `sysctl` or configuration changes have been made on the filesystem.
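For example, one way to spot-check a tuned sysctl value, assuming you use the `kubectl debug` approach described in the linked article (the node name, container image, and setting shown are placeholders):

```bash
# List nodes to find one from the node pool you configured
kubectl get nodes -o wide

# Start an interactive debug container on that node (image shown is only an example)
kubectl debug node/<node-name> -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0

# From the debug container, switch to the host's filesystem and verify a tuned value
chroot /host
sysctl net.core.somaxconn
```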
+## Custom node configuration supported parameters
+
+## Kubelet custom configuration
+
+Kubelet custom configuration is supported for Linux and Windows node pools. Supported parameters differ and are documented below.
+
+### Linux Kubelet custom configuration
+
+The supported Kubelet parameters and accepted values for Linux node pools are listed below.
+
+| Parameter | Allowed values/interval | Default | Description |
+| - | -- | - | -- |
+| `cpuManagerPolicy` | none, static | none | The static policy allows containers in [Guaranteed pods](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/) with integer CPU requests access to exclusive CPUs on the node. |
+| `cpuCfsQuota` | true, false | true | Enable/Disable CPU CFS quota enforcement for containers that specify CPU limits. |
+| `cpuCfsQuotaPeriod` | Interval in milliseconds (ms) | `100ms` | Sets CPU CFS quota period value. |
+| `imageGcHighThreshold` | 0-100 | 85 | The percent of disk usage after which image garbage collection is always run. Minimum disk usage that **will** trigger garbage collection. To disable image garbage collection, set to 100. |
+| `imageGcLowThreshold` | 0-100, no higher than `imageGcHighThreshold` | 80 | The percent of disk usage before which image garbage collection is never run. Minimum disk usage that **can** trigger garbage collection. |
+| `topologyManagerPolicy` | none, best-effort, restricted, single-numa-node | none | Optimize NUMA node alignment, see more [here](https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/). |
+| `allowedUnsafeSysctls` | `kernel.shm*`, `kernel.msg*`, `kernel.sem`, `fs.mqueue.*`, `net.*` | None | Allowed list of unsafe sysctls or unsafe sysctl patterns. |
+| `containerLogMaxSizeMB` | Size in megabytes (MB) | 10 | The maximum size (for example, 10 MB) of a container log file before it's rotated. |
+| `containerLogMaxFiles` | ≥ 2 | 5 | The maximum number of container log files that can be present for a container. |
+| `podMaxPids` | -1 to kernel PID limit | -1 (∞) | The maximum number of process IDs that can run in a pod. |
+
+### Windows Kubelet custom configuration (Preview)
+
+The supported Kubelet parameters and accepted values for Windows node pools are listed below.
+
+| Parameter | Allowed values/interval | Default | Description |
+| - | -- | - | -- |
+| `imageGcHighThreshold` | 0-100 | 85 | The percent of disk usage after which image garbage collection is always run. Minimum disk usage that **will** trigger garbage collection. To disable image garbage collection, set to 100. |
+| `imageGcLowThreshold` | 0-100, no higher than `imageGcHighThreshold` | 80 | The percent of disk usage before which image garbage collection is never run. Minimum disk usage that **can** trigger garbage collection. |
+| `containerLogMaxSizeMB` | Size in megabytes (MB) | 10 | The maximum size (for example, 10 MB) of a container log file before it's rotated. |
+| `containerLogMaxFiles` | ≥ 2 | 5 | The maximum number of container log files that can be present for a container. |
+
+## Linux OS custom configuration
+
+The supported OS settings and accepted values are listed below.
+
+### File handle limits
+
+When you're serving a lot of traffic, that traffic often comes from a large number of local files. You can tweak the following kernel settings and built-in limits to handle more, at the cost of some system memory.
+
+| Setting | Allowed values/interval | Default | Description |
+| - | -- | - | -- |
+| `fs.file-max` | 8192 - 12000500 | 709620 | Maximum number of file-handles that the Linux kernel will allocate, by increasing this value you can increase the maximum number of open files permitted. |
+| `fs.inotify.max_user_watches` | 781250 - 2097152 | 1048576 | Maximum number of file watches allowed by the system. Each *watch* is roughly 90 bytes on a 32-bit kernel, and roughly 160 bytes on a 64-bit kernel. |
+| `fs.aio-max-nr` | 65536 - 6553500 | 65536 | The aio-nr shows the current system-wide number of asynchronous io requests. aio-max-nr allows you to change the maximum value aio-nr can grow to. |
+| `fs.nr_open` | 8192 - 20000500 | 1048576 | The maximum number of file-handles a process can allocate. |
+
+### Socket and network tuning
+
+For agent nodes that are expected to handle very large numbers of concurrent sessions, you can tweak the following subset of TCP and network options per node pool.
+
+| Setting | Allowed values/interval | Default | Description |
+| - | -- | - | -- |
+| `net.core.somaxconn` | 4096 - 3240000 | 16384 | Maximum number of connection requests that can be queued for any given listening socket. An upper limit for the value of the backlog parameter passed to the [listen(2)](http://man7.org/linux/man-pages/man2/listen.2.html) function. If the backlog argument is greater than the `somaxconn`, then it's silently truncated to this limit.
+| `net.core.netdev_max_backlog` | 1000 - 3240000 | 1000 | Maximum number of packets, queued on the INPUT side, when the interface receives packets faster than kernel can process them. |
+| `net.core.rmem_max` | 212992 - 134217728 | 212992 | The maximum receive socket buffer size in bytes. |
+| `net.core.wmem_max` | 212992 - 134217728 | 212992 | The maximum send socket buffer size in bytes. |
+| `net.core.optmem_max` | 20480 - 4194304 | 20480 | Maximum ancillary buffer size (option memory buffer) allowed per socket. Socket option memory is used in a few cases to store extra structures relating to usage of the socket. |
+| `net.ipv4.tcp_max_syn_backlog` | 128 - 3240000 | 16384 | The maximum number of queued connection requests that have still not received an acknowledgment from the connecting client. If this number is exceeded, the kernel will begin dropping requests. |
+| `net.ipv4.tcp_max_tw_buckets` | 8000 - 1440000 | 32768 | Maximal number of `timewait` sockets held by system simultaneously. If this number is exceeded, time-wait socket is immediately destroyed and warning is printed. |
+| `net.ipv4.tcp_fin_timeout` | 5 - 120 | 60 | The length of time an orphaned (no longer referenced by any application) connection will remain in the FIN_WAIT_2 state before it's aborted at the local end. |
+| `net.ipv4.tcp_keepalive_time` | 30 - 432000 | 7200 | How often TCP sends out `keepalive` messages when `keepalive` is enabled. |
+| `net.ipv4.tcp_keepalive_probes` | 1 - 15 | 9 | How many `keepalive` probes TCP sends out, until it decides that the connection is broken. |
+| `net.ipv4.tcp_keepalive_intvl` | 1 - 75 | 75 | How frequently the probes are sent out. Multiplied by `tcp_keepalive_probes` it makes up the time to kill a connection that isn't responding, after probes started. |
+| `net.ipv4.tcp_tw_reuse` | 0 or 1 | 0 | Allow to reuse `TIME-WAIT` sockets for new connections when it's safe from protocol viewpoint. |
+| `net.ipv4.ip_local_port_range` | First: 1024 - 60999 and Last: 32768 - 65000 | First: 32768 and Last: 60999 | The local port range that is used by TCP and UDP traffic to choose the local port. It comprises two numbers: the first is the first local port allowed for TCP and UDP traffic on the agent node, and the second is the last local port number. |
+| `net.ipv4.neigh.default.gc_thresh1`| 128 - 80000 | 4096 | Minimum number of entries that may be in the ARP cache. Garbage collection won't be triggered if the number of entries is below this setting. |
+| `net.ipv4.neigh.default.gc_thresh2`| 512 - 90000 | 8192 | Soft maximum number of entries that may be in the ARP cache. This setting is arguably the most important, as ARP garbage collection will be triggered about 5 seconds after reaching this soft maximum. |
+| `net.ipv4.neigh.default.gc_thresh3`| 1024 - 100000 | 16384 | Hard maximum number of entries in the ARP cache. |
+| `net.netfilter.nf_conntrack_max` | 131072 - 1048576 | 131072 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_max` is the maximum number of nodes in the hash table, that is, the maximum number of connections supported by the `nf_conntrack` module or the size of connection tracking table. |
+| `net.netfilter.nf_conntrack_buckets` | 65536 - 147456 | 65536 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_buckets` is the size of hash table. |
+
+### Worker limits
+
+Like file descriptor limits, the number of workers or threads that a process can create is limited by both a kernel setting and user limits. The user limit on AKS is unlimited.
+
+| Setting | Allowed values/interval | Default | Description |
+| - | -- | - | -- |
+| `kernel.threads-max` | 20 - 513785 | 55601 | Processes can spin up worker threads. The maximum number of all threads that can be created is set with the kernel setting `kernel.threads-max`. |
+
+### Virtual memory
+
+The settings below can be used to tune the operation of the virtual memory (VM) subsystem of the Linux kernel and the `writeout` of dirty data to disk.
+
+| Setting | Allowed values/interval | Default | Description |
+| - | -- | - | -- |
+| `vm.max_map_count` | 65530 - 262144 | 65530 | This file contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling `malloc`, directly by `mmap`, `mprotect`, and `madvise`, and also when loading shared libraries. |
+| `vm.vfs_cache_pressure` | 1 - 500 | 100 | This percentage value controls the tendency of the kernel to reclaim the memory, which is used for caching of directory and inode objects. |
+| `vm.swappiness` | 0 - 100 | 60 | This control is used to define how aggressive the kernel will swap memory pages. Higher values will increase aggressiveness, lower values decrease the amount of swap. A value of 0 instructs the kernel not to initiate swap until the amount of free and file-backed pages is less than the high water mark in a zone. |
+| `swapFileSizeMB` | 1 MB - Size of the [temporary disk](../virtual-machines/managed-disks-overview.md#temporary-disk) (/dev/sdb) | None | SwapFileSizeMB specifies the size in MB of a swap file that will be created on the agent nodes in this node pool. |
+| `transparentHugePageEnabled` | `always`, `madvise`, `never` | `always` | [Transparent Hugepages](https://www.kernel.org/doc/html/latest/admin-guide/mm/transhuge.html#admin-guide-transhuge) is a Linux kernel feature intended to improve performance by making more efficient use of your processor's memory-mapping hardware. When enabled, the kernel attempts to allocate `hugepages` whenever possible and any Linux process will receive 2-MB pages if the `mmap` region is 2 MB naturally aligned. In certain cases when `hugepages` are enabled system wide, applications may end up allocating more memory resources. An application may `mmap` a large region but only touch 1 byte of it, in that case a 2-MB page might be allocated instead of a 4k page for no good reason. This scenario is why it's possible to disable `hugepages` system-wide or to only have them inside `MADV_HUGEPAGE madvise` regions. |
+| `transparentHugePageDefrag` | `always`, `defer`, `defer+madvise`, `madvise`, `never` | `madvise` | This value controls whether the kernel should make aggressive use of memory compaction to make more `hugepages` available. |
+
+> [!IMPORTANT]
+> For ease of search and readability, the OS settings are displayed in this document by their name, but they should be added to the configuration JSON file or AKS API using the [camelCase capitalization convention](/dotnet/standard/design-guidelines/capitalization-conventions).
+ ## Next steps - Learn [how to configure your AKS cluster](cluster-configuration.md).
After you have applied custom node configuration, you can confirm the settings h
[az-aks-update]: /cli/azure/aks#az-aks-update [az-aks-scale]: /cli/azure/aks#az-aks-scale [az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
[az-feature-list]: /cli/azure/feature#az-feature-list [az-provider-register]: /cli/azure/provider#az-provider-register [upgrade-cluster]: upgrade-cluster.md
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md
To control image versions, you'll want to import them into your own Azure Contai
```azurecli REGISTRY_NAME=<REGISTRY_NAME>
-SOURCE_REGISTRY=k8s.gcr.io
+SOURCE_REGISTRY=registry.k8s.io
CONTROLLER_IMAGE=ingress-nginx/controller CONTROLLER_TAG=v1.2.1 PATCH_IMAGE=ingress-nginx/kube-webhook-certgen
To control image versions, you'll want to import them into your own Azure Contai
```azurepowershell-interactive $RegistryName = "<REGISTRY_NAME>" $ResourceGroup = (Get-AzContainerRegistry | Where-Object {$_.name -eq $RegistryName} ).ResourceGroupName
-$SourceRegistry = "k8s.gcr.io"
+$SourceRegistry = "registry.k8s.io"
$ControllerImage = "ingress-nginx/controller" $ControllerTag = "v1.2.1" $PatchImage = "ingress-nginx/kube-webhook-certgen"
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md
helm repo update
# Install the cert-manager Helm chart helm install cert-manager jetstack/cert-manager \ --namespace ingress-basic \
- --version $CERT_MANAGER_TAG \
+ --version=$CERT_MANAGER_TAG \
--set installCRDs=true \ --set nodeSelector."kubernetes\.io/os"=linux \ --set image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_CONTROLLER \
aks Kubernetes Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-portal.md
Title: Access Kubernetes resources from the Azure portal description: Learn how to interact with Kubernetes resources to manage an Azure Kubernetes Service (AKS) cluster from the Azure portal. Previously updated : 12/16/2020 Last updated : 03/30/2023 # Access Kubernetes resources from the Azure portal The Azure portal includes a Kubernetes resource view for easy access to the Kubernetes resources in your Azure Kubernetes Service (AKS) cluster. Viewing Kubernetes resources from the Azure portal reduces context switching between the Azure portal and the `kubectl` command-line tool, streamlining the experience for viewing and editing your Kubernetes resources. The resource viewer currently includes multiple resource types, such as deployments, pods, and replica sets.
-The Kubernetes resource view from the Azure portal replaces the AKS dashboard add-on, which is deprecated.
+The Kubernetes resource view from the Azure portal replaces the deprecated AKS dashboard add-on.
## Prerequisites
-To view Kubernetes resources in the Azure portal, you need an AKS cluster. Any cluster is supported, but if using Azure Active Directory (Azure AD) integration, your cluster must use [AKS-managed Azure AD integration][aks-managed-aad]. If your cluster uses legacy Azure AD, you can upgrade your cluster in the portal or with the [Azure CLI][cli-aad-upgrade]. You can also [use the Azure portal][aks-quickstart-portal] to create a new AKS cluster.
+To view Kubernetes resources in the Azure portal, you need an AKS cluster. Any cluster is supported, but if you're using Azure Active Directory (Azure AD) integration, your cluster must use [AKS-managed Azure AD integration][aks-managed-aad]. If your cluster uses legacy Azure AD, you can upgrade your cluster in the portal or with the [Azure CLI][cli-aad-upgrade]. You can also [use the Azure portal][aks-quickstart-portal] to create a new AKS cluster.
## View Kubernetes resources
To see the Kubernetes resources, navigate to your AKS cluster in the Azure porta
In this example, we'll use our sample AKS cluster to deploy the Azure Vote application from the [AKS quickstart][aks-quickstart-portal].
-1. Select **Add** from any of the resource views (Namespace, Workloads, Services and ingresses, Storage, or Configuration).
-1. Paste the YAML for the Azure Vote application from the [AKS quickstart][aks-quickstart-portal].
-1. Select **Add** at the bottom of the YAML editor to deploy the application.
+1. From the **Services and ingresses** resource view, select **Create** > **Starter application**.
+2. Under **Create a basic web application**, select **Create**.
+3. On the **Application details** page, select **Next**.
+4. On the **Review YAML** page, select **Deploy**.
-Once the YAML file is added, the resource viewer shows both Kubernetes services that were created: the internal service (azure-vote-back), and the external service (azure-vote-front) to access the Azure Vote application. The external service includes a linked external IP address so you can easily view the application in your browser.
+Once the application is deployed, the resource view shows the two Kubernetes services that were created:
+
+- **azure-vote-back**: The internal service.
+- **azure-vote-front**: The external service, which includes a linked external IP address so you can view the application in your browser.
:::image type="content" source="media/kubernetes-portal/portal-services.png" alt-text="Azure Vote application information displayed in the Azure portal." lightbox="media/kubernetes-portal/portal-services.png"::: ### Monitor deployment insights
-AKS clusters with [Container insights][enable-monitor] enabled can quickly view deployment and other insights. From the Kubernetes resources view, users can see the live status of individual deployments, including CPU and memory usage, as well as transition to Azure monitor for more in-depth information about specific nodes and containers. Here's an example of deployment insights from a sample AKS cluster:
+For AKS clusters with [Container insights][enable-monitor] enabled, you can quickly view deployment and other insights. From the Kubernetes resources view, you can see the live status of individual deployments, including CPU and memory usage. You can also go to Azure Monitor for more in-depth information about specific nodes and containers.
+
+Here's an example of deployment insights from a sample AKS cluster:
:::image type="content" source="media/kubernetes-portal/deployment-insights.png" alt-text="Deployment insights displayed in the Azure portal." lightbox="media/kubernetes-portal/deployment-insights.png":::
The Kubernetes resource view also includes a YAML editor. A built-in YAML editor
:::image type="content" source="media/kubernetes-portal/service-editor.png" alt-text="YAML editor for a Kubernetes service displayed in the Azure portal.":::
-After editing the YAML, changes are applied by selecting **Review + save**, confirming the changes, and then saving again.
+To edit a YAML file for one of your resources, see the following steps:
+
+1. Navigate to your resource in the Azure portal.
+2. Select **YAML** and make your desired edits.
+3. Select **Review + save** > **Confirm manifest changes** > **Save**.
>[!WARNING]
-> Performing direct production changes via UI or CLI is not recommended, you should leverage [continuous integration (CI) and continuous deployment (CD) best practices](kubernetes-action.md). The Azure Portal Kubernetes management capabilities and the YAML editor are built for learning and flighting new deployments in a development and testing setting.
+> We don't recommend performing direct production changes via UI or CLI. Instead, you should leverage [continuous integration (CI) and continuous deployment (CD) best practices](kubernetes-action.md). The Azure portal Kubernetes management capabilities, such as the YAML editor, are built for learning and flighting new deployments in a development and testing setting.
## Troubleshooting
This section addresses common problems and troubleshooting steps.
To access the Kubernetes resources, you must have access to the AKS cluster, the Kubernetes API, and the Kubernetes objects. Ensure that you're either a cluster administrator or a user with the appropriate permissions to access the AKS cluster. For more information on cluster security, see [Access and identity options for AKS][concepts-identity]. >[!NOTE]
-> The kubernetes resource view in the Azure Portal is only supported by [managed-AAD enabled clusters](managed-aad.md) or non-AAD enabled clusters. If you are using a managed-AAD enabled cluster, your AAD user or identity needs to have the respective roles/role bindings to access the kubernetes API, in addition to the permission to pull the [user `kubeconfig`](control-kubeconfig-access.md).
+> The Kubernetes resource view in the Azure portal is only supported by [managed-AAD enabled clusters](managed-aad.md) or non-AAD enabled clusters. If you're using a managed-AAD enabled cluster, your AAD user or identity needs to have the respective roles/role bindings to access the Kubernetes API and the permission to pull the [user `kubeconfig`](control-kubeconfig-access.md).
### Enable resource view
For existing clusters, you may need to enable the Kubernetes resource view. To e
### [Azure CLI](#tab/azure-cli) > [!TIP]
-> The AKS feature for [**API server authorized IP ranges**](api-server-authorized-ip-ranges.md) can be added to limit API server access to only the firewall's public endpoint. Another option for such clusters is updating `--api-server-authorized-ip-ranges` to include access for a local client computer or IP address range (from which portal is being browsed). To allow this access, you need the computer's public IPv4 address. You can find this address with below command or by searching "what is my IP address" in an internet browser.
+> You can add the AKS feature for [**API server authorized IP ranges**](api-server-authorized-ip-ranges.md) to limit API server access to only the firewall's public endpoint. Another option is to update `--api-server-authorized-ip-ranges` to include access for a local client computer or IP address range (from which the portal is being browsed). To allow this access, you need the computer's public IPv4 address. You can find this address with the following command or you can search "what is my IP address" in your browser.
```bash # Retrieve your IP address
az aks update -g $RG -n $AKSNAME --api-server-authorized-ip-ranges $CURRENT_IP/3
### [Azure PowerShell](#tab/azure-powershell) > [!TIP]
-> The AKS feature for [**API server authorized IP ranges**](api-server-authorized-ip-ranges.md) can be added to limit API server access to only the firewall's public endpoint. Another option for such clusters is updating `-ApiServerAccessAuthorizedIpRange` to include access for a local client computer or IP address range (from which portal is being browsed). To allow this access, you need the computer's public IPv4 address. You can find this address with below command or by searching "what is my IP address" in an internet browser.
+> You can add the AKS feature for [**API server authorized IP ranges**](api-server-authorized-ip-ranges.md) to limit API server access to only the firewall's public endpoint. Another option is to update the `-ApiServerAccessAuthorizedIpRange` to include access for a local client computer or IP address range (from which portal is being browsed). To allow this access, you need the computer's public IPv4 address. You can find this address with the following command or you can search "what is my IP address" in your browser.
```azurepowershell # Retrieve your IP address
Set-AzAksCluster -ResourceGroupName $RG -Name $AKSNAME -ApiServerAccessAuthorize
## Next steps
-This article showed you how to access Kubernetes resources for your AKS cluster. See [Deployments and YAML manifests][deployments] for a deeper understanding of cluster resources and the YAML files that are accessed with the Kubernetes resource viewer.
+This article showed you how to access Kubernetes resources from the Azure portal. For more information on cluster resources, see [Deployments and YAML manifests][deployments].
<!-- LINKS - internal --> [concepts-identity]: concepts-identity.md [aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
[deployments]: concepts-clusters-workloads.md#deployments-and-yaml-manifests [aks-managed-aad]: managed-aad.md [cli-aad-upgrade]: managed-aad.md#upgrade-to-aks-managed-azure-ad-integration
aks Quick Kubernetes Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep.md
Two [Kubernetes Services][kubernetes-service] are also created:
* An external service to access the Azure Vote application from the internet. 1. Create a file named `azure-vote.yaml`.
- * If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system
+
1. Copy in the following YAML definition: ```yaml
aks Quick Kubernetes Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md
Two [Kubernetes Services][kubernetes-service] are also created:
1. Create a file named `azure-vote.yaml` and copy in the following manifest.
- * If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system.
-
+
```yaml apiVersion: apps/v1 kind: Deployment
aks Quick Kubernetes Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md
Two Kubernetes Services are also created:
* An internal service for the Redis instance. * An external service to access the Azure Vote application from the internet.
-1. In the Cloud Shell, use an editor to create a file named `azure-vote.yaml`, such as:
- * `code azure-vote.yaml`
- * `nano azure-vote.yaml`, or
- * `vi azure-vote.yaml`.
-
-1. Copy in the following YAML definition:
+1. In the Cloud Shell, open an editor and create a file named `azure-vote.yaml`.
+2. Paste in the following YAML definition:
```yaml apiVersion: apps/v1
aks Quick Kubernetes Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-powershell.md
Two [Kubernetes Services][kubernetes-service] are also created:
* An external service to access the Azure Vote application from the internet. 1. Create a file named `azure-vote.yaml`.
- * If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system
+
1. Copy in the following YAML definition: ```yaml
aks Quick Kubernetes Deploy Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-rm-template.md
Two [Kubernetes Services][kubernetes-service] are also created:
* An external service to access the Azure Vote application from the internet. 1. Create a file named `azure-vote.yaml`.
- * If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system
+
1. Copy in the following YAML definition: ```yaml
aks Stop Api Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/stop-api-upgrade.md
- Title: Stop cluster upgrades on API breaking changes in Azure Kubernetes Service (AKS) (preview)
-description: Learn how to stop minor version change Azure Kubernetes Service (AKS) cluster upgrades on API breaking changes.
--- Previously updated : 03/24/2023--
-# Stop cluster upgrades on API breaking changes in Azure Kubernetes Service (AKS)
-
-To stay within a supported Kubernetes version, you usually have to upgrade your version at least once per year and prepare for all possible disruptions. These disruptions include ones caused by API breaking changes and deprecations and dependencies such as Helm and CSI. It can be difficult to anticipate these disruptions and migrate critical workloads without experiencing any downtime.
-
-Azure Kubernetes Service (AKS) now supports fail fast on minor version change cluster upgrades. This feature alerts you with an error message if it detects usage on deprecated APIs in the goal version.
--
-## Fail fast on control plane minor version manual upgrades in AKS (preview)
-
-AKS will fail fast on minor version change cluster manual upgrades if it detects usage on deprecated APIs in the goal version. This will only happen if the following criteria are true:
--- It's a minor version change for the cluster control plane.-- Your Kubernetes goal version is >= 1.26.0.-- The PUT MC request uses a preview API version of >= 2023-01-02-preview.-- The usage is performed within the last 1-12 hours. We record usage hourly, so usage within the last hour isn't guaranteed to appear in the detection.-
-If the previous criteria are true and you attempt an upgrade, you'll receive an error message similar to the following example error message:
-
-```
-Bad Request({
-
- "code": "ValidationError",
-
- "message": "Control Plane upgrade is blocked due to recent usage of a Kubernetes API deprecated in the specified version. Please refer to https://kubernetes.io/docs/reference/using-api/deprecation-guide to migrate the usage. To bypass this error, set IgnoreKubernetesDeprecations in upgradeSettings.overrideSettings. Bypassing this error without migrating usage will result in the deprecated Kubernetes API calls failing. Usage details: 1 error occurred:\n\t* usage has been detected on API flowcontrol.apiserver.k8s.io.prioritylevelconfigurations.v1beta1, and was recently seen at: 2023-03-23 20:57:18 +0000 UTC, which will be removed in 1.26\n\n",
-
- "subcode": "UpgradeBlockedOnDeprecatedAPIUsage"
-
-})
-```
-
-After receiving the error message, you have two options:
--- Remove usage on your end and wait 12 hours for the current record to expire.-- Bypass the validation to ignore API changes.-
-### Remove usage on API breaking changes
-
-Remove usage on API breaking changes using the following steps:
-
-1. Remove the deprecated API, which is listed in the error message.
-2. Wait 12 hours for the current record to expire.
-3. Retry your cluster upgrade.
-
-### Bypass validation to ignore API changes
-
-To bypass validation to ignore API breaking changes, update the `"properties":` block of `Microsoft.ContainerService/ManagedClusters` `PUT` operation with the following settings:
-
-> [!NOTE]
-> The date and time you specify for `"until"` has to be in the future. `Z` stands for timezone. The following example is in GMT. For more information, see [Combined date and time representations](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations).
-
-```
-{
- "properties": {
- "upgradeSettings": {
- "overrideSettings": {
- "controlPlaneOverrides": [
- "IgnoreKubernetesDeprecations"
- ],
- "until": "2023-04-01T13:00:00Z"
- }
- }
- }
-}
-```
-
-## Next steps
-
-In this article, you learned how AKS detects deprecated APIs before an update is triggered and fails the upgrade operation upfront. To learn more about AKS cluster upgrades, see:
--- [Upgrade an AKS cluster][upgrade-cluster]-- [Use Planned Maintenance to schedule and control upgrades for your AKS clusters (preview)][planned-maintenance-aks]-
-<!-- INTERNAL LINKS -->
-[upgrade-cluster]: upgrade-cluster.md
-[planned-maintenance-aks]: planned-maintenance.md
api-management Authentication Basic Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-basic-policy.md
Use the `authentication-basic` policy to authenticate with a backend service usi
| Attribute | Description | Required | Default | | -- | | -- | - |
-|username|Specifies the username of the Basic credential.|Yes|N/A|
-|password|Specifies the password of the Basic credential.|Yes|N/A|
+|username|Specifies the username of the Basic credential. Policy expressions are allowed. |Yes|N/A|
+|password|Specifies the password of the Basic credential. Policy expressions are allowed. |Yes|N/A|
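For illustration, a minimal sketch of the policy in an `inbound` section; the credentials shown are placeholders and would typically reference named values or policy expressions rather than literals:

```xml
<inbound>
    <base />
    <!-- Placeholder credentials; {{...}} references named values defined in the API Management instance -->
    <authentication-basic username="{{backend-username}}" password="{{backend-password}}" />
</inbound>
```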
## Usage
api-management Authentication Certificate Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-certificate-policy.md
| Attribute | Description | Required | Default | | -- | | -- | - |
-|thumbprint|The thumbprint for the client certificate.|Either `thumbprint` or `certificate-id` can be present.|N/A|
-|certificate-id|The certificate resource name.|Either `thumbprint` or `certificate-id` can be present.|N/A|
-|body|Client certificate as a byte array. Use if the certificate isn't retrieved from the built-in certificate store.|No|N/A|
-|password|Password for the client certificate.|Use if certificate specified in `body` is password protected.|N/A|
+|thumbprint|The thumbprint for the client certificate. Policy expressions are allowed. |Either `thumbprint` or `certificate-id` can be present.|N/A|
+|certificate-id|The certificate resource name. Policy expressions are allowed.|Either `thumbprint` or `certificate-id` can be present.|N/A|
+|body|Client certificate as a byte array. Use if the certificate isn't retrieved from the built-in certificate store. Policy expressions are allowed.|No|N/A|
+|password|Password for the client certificate. Policy expressions are allowed.|Use if certificate specified in `body` is password protected.|N/A|
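As a sketch, the policy might reference a client certificate already uploaded to the API Management certificate store by its resource name; `my-backend-client-cert` is a placeholder:

```xml
<inbound>
    <base />
    <!-- Use a certificate from the built-in certificate store; the ID shown is a placeholder -->
    <authentication-certificate certificate-id="my-backend-client-cert" />
</inbound>
```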
## Usage
api-management Authentication Managed Identity Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-managed-identity-policy.md
Both system-assigned identity and any of the multiple user-assigned identities c
| Attribute | Description | Required | Default | | -- | | -- | - | |resource|String. The application ID of the target web API (secured resource) in Azure Active Directory. Policy expressions are allowed. |Yes|N/A|
-|client-id|String. The client ID of the user-assigned identity in Azure Active Directory. Policy expressions are not allowed. |No|system-assigned identity|
-|output-token-variable-name|String. Name of the context variable that will receive token value as an object of type `string`. Policy expresssions are not allowed. |No|N/A|
-|ignore-error|Boolean. If set to `true`, the policy pipeline will continue to execute even if an access token is not obtained.|No|`false`|
+|client-id|String. The client ID of the user-assigned identity in Azure Active Directory. Policy expressions aren't allowed. |No|system-assigned identity|
+|output-token-variable-name|String. Name of the context variable that will receive token value as an object of type `string`. Policy expressions aren't allowed. |No|N/A|
+|ignore-error|Boolean. If set to `true`, the policy pipeline continues to execute even if an access token isn't obtained.|No|`false`|
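A minimal sketch using the system-assigned identity to obtain a token for an example resource; the resource URI and variable name are illustrative:

```xml
<inbound>
    <base />
    <!-- Acquire a token for the target resource and store it in a context variable for later use -->
    <authentication-managed-identity resource="https://vault.azure.net" output-token-variable-name="msi-access-token" ignore-error="false" />
</inbound>
```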
## Usage
api-management Cache Lookup Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-lookup-policy.md
Use the `cache-lookup` policy to perform cache lookup and return a valid cached
| Attribute | Description | Required | Default | | -- | | -- | - |
-| allow-private-response-caching | When set to `true`, allows caching of requests that contain an Authorization header. | No | `false` |
-| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise. | No | `prefer-external` |
-| downstream-caching-type | This attribute must be set to one of the following values.<br /><br /> - none - downstream caching is not allowed.<br />- private - downstream private caching is allowed.<br />- public - private and shared downstream caching is allowed. | No | none |
-| must-revalidate | When downstream caching is enabled this attribute turns on or off the `must-revalidate` cache control directive in gateway responses. | No | `true` |
-| vary-by-developer | Set to `true` to cache responses per developer account that owns [subscription key](./api-management-subscriptions.md) included in the request. | Yes | `false` |
-| vary-by-developer-groups | Set to `true` to cache responses per [user group](./api-management-howto-create-groups.md). | Yes | `false` |
+| allow-private-response-caching | When set to `true`, allows caching of requests that contain an Authorization header. Policy expressions are allowed. | No | `false` |
+| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise.<br/><br/>Policy expressions aren't allowed. | No | `prefer-external` |
+| downstream-caching-type | This attribute must be set to one of the following values.<br /><br /> - none - downstream caching is not allowed.<br />- private - downstream private caching is allowed.<br />- public - private and shared downstream caching is allowed.<br/><br/>Policy expressions are allowed. | No | none |
+| must-revalidate | When downstream caching is enabled this attribute turns on or off the `must-revalidate` cache control directive in gateway responses. Policy expressions are allowed. | No | `true` |
+| vary-by-developer | Set to `true` to cache responses per developer account that owns [subscription key](./api-management-subscriptions.md) included in the request. Policy expressions are allowed. | Yes | `false` |
+| vary-by-developer-groups | Set to `true` to cache responses per [user group](./api-management-howto-create-groups.md). Policy expressions are allowed. | Yes | `false` |
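For illustration, a sketch of a typical pairing: `cache-lookup` in the `inbound` section with a matching `cache-store` in `outbound`. The vary-by child elements are covered in the elements table below, and the values shown are examples only:

```xml
<inbound>
    <base />
    <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" downstream-caching-type="none" must-revalidate="true">
        <!-- Cache entries vary by the listed request headers -->
        <vary-by-header>Accept</vary-by-header>
        <vary-by-header>Accept-Charset</vary-by-header>
    </cache-lookup>
</inbound>
<outbound>
    <base />
    <!-- Cache successful responses for one hour -->
    <cache-store duration="3600" />
</outbound>
```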
## Elements
api-management Cache Lookup Value Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-lookup-value-policy.md
Use the `cache-lookup-value` policy to perform cache lookup by key and return a
| Attribute | Description | Required | Default | ||--|--|--|
-| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise. | No | `prefer-external` |
-| default-value | A value that will be assigned to the variable if the cache key lookup resulted in a miss. If this attribute is not specified, `null` is assigned. | No | `null` |
-| key | Cache key value to use in the lookup. | Yes | N/A |
-| variable-name | Name of the [context variable](api-management-policy-expressions.md#ContextVariables) the looked up value will be assigned to, if lookup is successful. If lookup results in a miss, the variable will not be set. | Yes | N/A |
+| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise.<br/><br/>Policy expressions aren't allowed. | No | `prefer-external` |
+| default-value | A value that will be assigned to the variable if the cache key lookup resulted in a miss. If this attribute is not specified, `null` is assigned. Policy expressions are allowed. | No | `null` |
+| key | Cache key value to use in the lookup. Policy expressions are allowed. | Yes | N/A |
+| variable-name | Name of the [context variable](api-management-policy-expressions.md#ContextVariables) the looked up value will be assigned to, if lookup is successful. If lookup results in a miss, the variable will not be set. Policy expressions aren't allowed. | Yes | N/A |
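A minimal sketch; the key, variable name, and default value are placeholders:

```xml
<!-- If the key is found in the cache, its value is assigned to the userProfile context variable -->
<cache-lookup-value key="userprofile-12345" variable-name="userProfile" default-value="not-found" />
```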
## Usage
api-management Cache Remove Value Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-remove-value-policy.md
The `cache-remove-value` deletes a cached item identified by its key. The key ca
| Attribute | Description | Required | Default | ||--|--|--|
-| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise. | No | `prefer-external` |
-| key | The key of the previously cached value to be removed from the cache. | Yes | N/A |
+| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise. <br/><br/>Policy expressions aren't allowed. | No | `prefer-external` |
+| key | The key of the previously cached value to be removed from the cache. Policy expressions are allowed. | Yes | N/A |
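A minimal sketch using a placeholder key:

```xml
<!-- Invalidate the cached entry so the next lookup for this key results in a miss -->
<cache-remove-value key="userprofile-12345" />
```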
## Usage
api-management Cache Store Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-store-policy.md
The `cache-store` policy caches responses according to the specified cache setti
| Attribute | Description | Required | Default | | -- | | -- | - |
-| duration | Time-to-live of the cached entries, specified in seconds. | Yes | N/A |
-| cache-response | Set to `true` to cache the current HTTP response. If the attribute is omitted or set to `false`, only HTTP responses with the status code `200 OK` are cached. | No | `false` |
+| duration | Time-to-live of the cached entries, specified in seconds. Policy expressions are allowed. | Yes | N/A |
+| cache-response | Set to `true` to cache the current HTTP response. If the attribute is omitted or set to `false`, only HTTP responses with the status code `200 OK` are cached. Policy expressions are allowed. | No | `false` |
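For illustration, a sketch of the policy in the `outbound` section; the duration is an example value:

```xml
<outbound>
    <base />
    <!-- Cache 200 OK responses for 20 minutes -->
    <cache-store duration="1200" />
</outbound>
```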
## Usage
api-management Cache Store Value Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-store-value-policy.md
The `cache-store-value` performs cache storage by key. The key can have an arbit
| Attribute | Description | Required | Default | ||--|--|--|
-| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise. | No | `prefer-external` |
-| duration | Value will be cached for the provided duration value, specified in seconds. | Yes | N/A |
-| key | Cache key the value will be stored under. | Yes | N/A |
-| value | The value to be cached. | Yes | N/A |
+| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise.<br/><br/>Policy expressions aren't allowed.| No | `prefer-external` |
+| duration | Value will be cached for the provided duration value, specified in seconds. Policy expressions are allowed. | Yes | N/A |
+| key | Cache key the value will be stored under. Policy expressions are allowed. | Yes | N/A |
+| value | The value to be cached. Policy expressions are allowed. | Yes | N/A |
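A minimal sketch; the key, context variable, and duration are placeholders:

```xml
<!-- Store the value of the userProfile context variable in the cache for one hour -->
<cache-store-value key="userprofile-12345" value="@((string)context.Variables["userProfile"])" duration="3600" />
```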
## Usage
api-management Check Header Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/check-header-policy.md
Use the `check-header` policy to enforce that a request has a specified HTTP he
| Attribute | Description | Required | Default | | -- | - | -- | - |
-| name | The name of the HTTP header to check. | Yes | N/A |
-| failed-check-httpcode | HTTP status code to return if the header doesn't exist or has an invalid value. | Yes | N/A |
-| failed-check-error-message | Error message to return in the HTTP response body if the header doesn't exist or has an invalid value. This message must have any special characters properly escaped. | Yes | N/A |
-| ignore-case | Boolean. If set to `true`, case is ignored when the header value is compared against the set of acceptable values. | Yes | N/A |
+| name | The name of the HTTP header to check. Policy expressions are allowed. | Yes | N/A |
+| failed-check-httpcode | HTTP status code to return if the header doesn't exist or has an invalid value. Policy expressions are allowed. | Yes | N/A |
+| failed-check-error-message | Error message to return in the HTTP response body if the header doesn't exist or has an invalid value. This message must have any special characters properly escaped. Policy expressions are allowed. | Yes | N/A |
+| ignore-case | Boolean. If set to `true`, case is ignored when the header value is compared against the set of acceptable values. Policy expressions are allowed. | Yes | N/A |
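For illustration, a sketch that rejects requests unless the `Authorization` header matches one of the allowed values (defined with the `value` elements described below); the header value shown is a placeholder:

```xml
<check-header name="Authorization" failed-check-httpcode="401" failed-check-error-message="Not authorized" ignore-case="false">
    <value>f6dc69a089844cf6b2019bae6d36fac8</value>
</check-header>
```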
## Elements
Use the `check-header` policy to enforce that a request has a specified HTTP he
| value | Add one or more of these elements to specify allowed HTTP header values. When multiple `value` elements are specified, the check is considered a success if any one of the values is a match. | No |
+
## Usage

-- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound
-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- **[Policy sections:](./api-management-howto-policies.md#sections)** inbound
+- **[Policy scopes:](./api-management-howto-policies.md#scopes)** global, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted

## Example
Use the `check-header` policy to enforce that a request has a specified HTTP he
* [API Management access restriction policies](api-management-access-restriction-policies.md)
api-management Cors Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cors-policy.md
The `cors` policy adds cross-origin resource sharing (CORS) support to an operat
|Name|Description|Required|Default| |-|--|--|-|
-|allow-credentials|The `Access-Control-Allow-Credentials` header in the preflight response will be set to the value of this attribute and affect the client's ability to submit credentials in cross-domain requests.|No|`false`|
-|terminate-unmatched-request|Controls the processing of cross-origin requests that don't match the policy settings.<br/><br/>When `OPTIONS` request is processed as a preflight request and `Origin` header doesn't match policy settings:<br/> - If the attribute is set to `true`, immediately terminate the request with an empty `200 OK` response<br/>- If the attribute is set to `false`, check inbound for other in-scope `cors` policies that are direct children of the inbound element and apply them. If no `cors` policies are found, terminate the request with an empty `200 OK` response. <br/><br/>When `GET` or `HEAD` request includes the `Origin` header (and therefore is processed as a simple cross-origin request), and doesn't match policy settings:<br/>- If the attribute is set to `true`, immediately terminate the request with an empty `200 OK` response.<br/> - If the attribute is set to `false`, allow the request to proceed normally and don't add CORS headers to the response.|No|`true`|
+|allow-credentials|The `Access-Control-Allow-Credentials` header in the preflight response will be set to the value of this attribute and affect the client's ability to submit credentials in cross-domain requests. Policy expressions are allowed.|No|`false`|
+|terminate-unmatched-request|Controls the processing of cross-origin requests that don't match the policy settings. Policy expressions are allowed.<br/><br/>When `OPTIONS` request is processed as a preflight request and `Origin` header doesn't match policy settings:<br/> - If the attribute is set to `true`, immediately terminate the request with an empty `200 OK` response<br/>- If the attribute is set to `false`, check inbound for other in-scope `cors` policies that are direct children of the inbound element and apply them. If no `cors` policies are found, terminate the request with an empty `200 OK` response. <br/><br/>When `GET` or `HEAD` request includes the `Origin` header (and therefore is processed as a simple cross-origin request), and doesn't match policy settings:<br/>- If the attribute is set to `true`, immediately terminate the request with an empty `200 OK` response.<br/> - If the attribute is set to `false`, allow the request to proceed normally and don't add CORS headers to the response.|No|`true`|
## Elements

|Name|Description|Required|Default|
|-|--|--|-|
|allowed-origins|Contains `origin` elements that describe the allowed origins for cross-domain requests. `allowed-origins` can contain either a single `origin` element that specifies `*` to allow any origin, or one or more `origin` elements that contain a URI.|Yes|N/A|
-|origin|The value can be either `*` to allow all origins, or a URI that specifies a single origin. The URI must include a scheme, host, and port.|Yes|If the port is omitted in a URI, port 80 is used for HTTP and port 443 is used for HTTPS.|
|allowed-methods|This element is required if methods other than `GET` or `POST` are allowed. Contains `method` elements that specify the supported HTTP verbs. The value `*` indicates all methods.|No|If this section isn't present, `GET` and `POST` are supported.|
-|method|Specifies an HTTP verb.|At least one `method` element is required if the `allowed-methods` section is present.|N/A|
|allowed-headers|This element contains `header` elements specifying names of the headers that can be included in the request.|Yes|N/A| |expose-headers|This element contains `header` elements specifying names of the headers that will be accessible by the client.|No|N/A|
-|header|Specifies a header name.|At least one `header` element is required in `allowed-headers` or in `expose-headers` if that section is present.|N/A|
> [!CAUTION]
> Use the `*` wildcard with care in policy settings. This configuration may be overly permissive and may make an API more vulnerable to certain [API security threats](mitigate-owasp-api-threats.md#security-misconfiguration).
+### allowed-origins elements
+
+|Name|Description|Required|Default|
+|-|--|--|-|
+|origin|The value can be either `*` to allow all origins, or a URI that specifies a single origin. The URI must include a scheme, host, and port.|Yes|If the port is omitted in a URI, port 80 is used for HTTP and port 443 is used for HTTPS.|
++
### allowed-methods attributes

|Name|Description|Required|Default|
|-|--|--|-|
-|preflight-result-max-age|The `Access-Control-Max-Age` header in the preflight response will be set to the value of this attribute and affect the user agent's ability to cache the preflight response.|No|0|
+|preflight-result-max-age|The `Access-Control-Max-Age` header in the preflight response will be set to the value of this attribute and affect the user agent's ability to cache the preflight response. Policy expressions are allowed.|No|0|
+
+### allowed-methods elements
+
+|Name|Description|Required|Default|
+|-|--|--|-|
+|method|Specifies an HTTP verb. Policy expressions are allowed.|At least one `method` element is required if the `allowed-methods` section is present.|N/A|
+
+### allowed-headers elements
+
+|Name|Description|Required|Default|
+|-|--|--|-|
+|header|Specifies a header name.|At least one `header` element is required in `allowed-headers` if that section is present.|N/A|
+
+### expose-headers elements
+
+|Name|Description|Required|Default|
+|-|--|--|-|
+|header|Specifies a header name.|At least one `header` element is required in `expose-headers` if that section is present.|N/A|
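Pulling the attributes and elements above together, a rough sketch of a `cors` policy might look like the following; the origin, methods, and header names are placeholders for your own values:

```xml
<!-- Sketch: allow cross-origin calls from a single trusted origin -->
<cors allow-credentials="true">
    <allowed-origins>
        <origin>https://contoso.com</origin>
    </allowed-origins>
    <allowed-methods preflight-result-max-age="300">
        <method>GET</method>
        <method>POST</method>
    </allowed-methods>
    <allowed-headers>
        <header>Content-Type</header>
        <header>Authorization</header>
    </allowed-headers>
    <expose-headers>
        <header>x-request-id</header>
    </expose-headers>
</cors>
```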
+
## Usage

- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
api-management Emit Metric Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/emit-metric-policy.md
The `emit-metric` policy sends custom metrics in the specified format to Applica
| Attribute | Description | Required | Default value | | | -- | | -- |
-| name | A string or policy expression. Name of custom metric. | Yes | N/A |
-| namespace | A string or policy expression. Namespace of custom metric. | No | API Management |
-| value | An integer or policy expression. Value of custom metric. | No | 1 |
+| name | A string. Name of custom metric. Policy expressions aren't allowed. | Yes | N/A |
+| namespace | A string. Namespace of custom metric. Policy expressions aren't allowed. | No | API Management |
+| value | Value of custom metric expressed as an integer. Policy expressions are allowed. | No | 1 |
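As a sketch under these attribute rules, an `emit-metric` policy might look like the following; the metric name, namespace, and dimensions are placeholders (the `dimension` element is covered in the Elements section of the article):

```xml
<!-- Sketch: emit a custom "Request" metric with a count of 1 and two dimensions -->
<emit-metric name="Request" value="1" namespace="apim-metrics">
    <dimension name="API ID" />
    <dimension name="Client IP" value="@(context.Request.IpAddress)" />
</emit-metric>
```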
## Elements
api-management Enable Cors Power Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/enable-cors-power-platform.md
+
+ Title: Enable CORS policies to test Azure API Management custom connector
+description: How to enable CORS policies in Azure API Management and Power Platform to test a custom connector from Power Platform applications.
+
+ Last updated : 03/24/2023
+
+# Enable CORS policies to test custom connector from Power Platform
+Cross-origin resource sharing (CORS) is an HTTP-header based mechanism that allows a server to indicate any origins (domain, scheme, or port) other than its own from which a browser should permit loading resources. Customers can add a [CORS policy](cors-policy.md) to their web APIs in Azure API Management, which adds cross-origin resource sharing support to an operation or an API to allow cross-domain calls from browser-based clients.
+
+If you've exported an API from API Management as a [custom connector](export-api-power-platform.md) in the Power Platform and want to use the Power Apps or Power Automate test console to call the API, you need to configure your API to explicitly enable cross-origin requests from Power Platform applications. This article shows you how to configure the following two necessary policy settings:
+
+* Add a CORS policy to your API
+
+* Add a policy to your custom connector that sets an Origin header on HTTP requests
+
+## Prerequisites
+
++ Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)
++ Export an API from your API Management instance to a Power Platform environment as a [custom connector](export-api-power-platform.md)
+
+## Add CORS policy to API in API Management
+
+Follow these steps to configure the CORS policy in API Management.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and go to your API Management instance.
+1. In the left menu, select **APIs** and select the API that you exported as a custom connector. If you want to, select only an API operation to apply the policy to.
+1. In the **Policies** section, in the **Inbound processing** section, select **+ Add policy**.
+ 1. Select **Allow cross-origin resource sharing (CORS)**.
+ 1. Add the following **Allowed origin**: `https://make.powerapps.com`.
+ 1. Select **Save**.
+
+* For more information about configuring a policy, see [Set or edit policies](set-edit-policies.md).
+* For details about the CORS policy, see the [cors](cors-policy.md) policy reference.
+
+> [!NOTE]
+> If you already have an existing CORS policy at the service (all APIs) level to enable the test console of the developer portal, you can add the `https://make.powerapps.com` origin to that policy instead of configuring a separate policy for the API or operation.
+
+> [!NOTE]
+> Depending on how the custom connector gets used in Power Platform applications, you might need to configure additional origins in the CORS policy. If you experience CORS problems when running Power Platform applications, use developer tools in your browser, tracing in API Management, or Application Insights to investigate the issues.
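As a rough sketch (assumed, not copied from the article), the resulting CORS configuration in the API's inbound section might resemble the following; the methods and the wildcard header entry are illustrative and can be tightened as needed:

```xml
<!-- Sketch: allow the Power Apps test console origin to call this API -->
<inbound>
    <base />
    <cors allow-credentials="false">
        <allowed-origins>
            <origin>https://make.powerapps.com</origin>
        </allowed-origins>
        <allowed-methods>
            <method>GET</method>
            <method>POST</method>
        </allowed-methods>
        <allowed-headers>
            <header>*</header>
        </allowed-headers>
    </cors>
</inbound>
```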
++
+## Add policy to custom connector to set Origin header
+
+Add the following policy to your custom connector in your Power Platform environment. The policy sets an Origin header to match the CORS origin you allowed in API Management.
+
+For details about editing settings of a custom connector, see [Create a custom connector from scratch](/connectors/custom-connectors/define-blank).
+
+1. Sign in to Power Apps or Power Automate.
+1. On the left pane, select **Data** > **Custom Connectors**.
+1. Select your connector from the list of custom connectors.
+1. Select the pencil (Edit) icon to edit the custom connector.
+1. Select **3. Definition**.
+1. In **Policies**, select **+ New policy**. Select or enter the following policy details.
+
+
+ |Setting |Value |
+ |||
+ |Name | A name of your choice, such as **set-origin-header** |
+ |Template | **Set HTTP header** |
+ |Header name | **Origin** |
+ |Header value | `https://make.powerapps.com` (same URL that you configured in API Management) |
+ |Action if header exists | **override** |
+ |Run policy on | **Request** |
+
+ :::image type="content" source="media/enable-cors-power-platform/cors-policy-power-platform.png" alt-text="Screenshot of creating policy in Power Platform custom connector to set an Origin header in HTTP requests.":::
+
+1. Select **Update connector**.
+
+1. After setting the policy, go to the **5. Test** page to test the custom connector.
+
+## Next steps
+
+* [Learn more about the Power Platform](https://powerplatform.microsoft.com/)
+* [Learn more about creating and using custom connectors](/connectors/custom-connectors/)
api-management Export Api Power Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/export-api-power-platform.md
Previously updated : 08/12/2022
Last updated : 03/24/2023
+
# Export APIs from Azure API Management to the Power Platform
This article walks through the steps in the Azure portal to create a custom Powe
## Prerequisites

+ Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)
-+ Make sure there is an API in your API Management instance that you'd like to export to the Power Platform
++ Make sure there's an API in your API Management instance that you'd like to export to the Power Platform
+ Make sure you have a Power Apps or Power Automate [environment](/powerapps/powerapps-overview#power-apps-for-admins)

## Create a custom connector to an API
This article walks through the steps in the Azure portal to create a custom Powe
:::image type="content" source="media/export-api-power-platform/create-custom-connector.png" alt-text="Create custom connector to API in API Management":::
-Once the connector is created, navigate to your [Power Apps](https://make.powerapps.com) or [Power Automate](https://make.powerautomate.com) environment. You will see the API listed under **Data > Custom Connectors**.
+Once the connector is created, navigate to your [Power Apps](https://make.powerapps.com) or [Power Automate](https://make.powerautomate.com) environment. You'll see the API listed under **Data > Custom Connectors**.
:::image type="content" source="media/export-api-power-platform/custom-connector-power-app.png" alt-text="Custom connector in Power Platform":::
You can manage your custom connector in your Power Apps or Power Platform enviro
1. Select your connector from the list of custom connectors.
1. Select the pencil (Edit) icon to edit and test the custom connector.
-> [!NOTE]
-> To call the API from the Power Apps test console, you need to add the `https://make.powerautomate.com` URL as an origin to the [CORS policy](cors-policy.md) in your API Management instance.
+> [!IMPORTANT]
+> To call the API from the Power Apps test console, you need to configure a CORS policy in your API Management instance and create a policy in the custom connector to set an Origin header in HTTP requests. For more information, see [Enable CORS policies to test custom connector from Power Platform](enable-cors-power-platform.md).
>
-> Depending on how the custom connector gets used when running Power Apps, you might need to configure additional origins in the CORS policy. You can use developer tools in your browser, tracing in API Management, or Application Insights to investigate CORS issues when running Power Apps.
## Update a custom connector
api-management Find And Replace Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/find-and-replace-policy.md
The `find-and-replace` policy finds a request or response substring and replaces
| Attribute | Description | Required | Default | | -- | | -- | - |
-|from|The string to search for.|Yes|N/A|
-|to|The replacement string. Specify a zero length replacement string to remove the search string.|Yes|N/A|
+|from|The string to search for. Policy expressions are allowed. |Yes|N/A|
+|to|The replacement string. Specify a zero length replacement string to remove the search string. Policy expressions are allowed. |Yes|N/A|
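A minimal sketch of the policy follows; the search and replacement strings are placeholders:

```xml
<!-- Sketch: replace every occurrence of "notebook" with "laptop" in the body -->
<find-and-replace from="notebook" to="laptop" />
```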
## Usage
api-management Forward Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/forward-request-policy.md
The `forward-request` policy forwards the incoming request to the backend servic
| Attribute | Description | Required | Default | | | -- | -- | - |
-| timeout | The amount of time in seconds to wait for the HTTP response headers to be returned by the backend service before a timeout error is raised. Minimum value is 0 seconds. Values greater than 240 seconds may not be honored, because the underlying network infrastructure can drop idle connections after this time. | No | 300 |
-| follow-redirects | Specifies whether redirects from the backend service are followed by the gateway or returned to the caller. | No | `false` |
+| timeout | The amount of time in seconds to wait for the HTTP response headers to be returned by the backend service before a timeout error is raised. Minimum value is 0 seconds. Values greater than 240 seconds may not be honored, because the underlying network infrastructure can drop idle connections after this time. Policy expressions are allowed. | No | 300 |
+| follow-redirects | Specifies whether redirects from the backend service are followed by the gateway or returned to the caller. Policy expressions are allowed. | No | `false` |
| buffer-request-body | When set to `true`, request is buffered and will be reused on [retry](retry-policy.md). | No | `false` |
-| buffer-response | Affects processing of chunked responses. When set to `false`, each chunk received from the backend is immediately returned to the caller. When set to `true`, chunks are buffered (8 KB, unless end of stream is detected) and only then returned to the caller.<br/><br/>Set to `false` with backends such as those implementing [server-sent events (SSE)](how-to-server-sent-events.md) that require content to be returned or streamed immediately to the caller. | No | `true` |
-| fail-on-error-status-code | When set to `true`, triggers [on-error](api-management-error-handling-policies.md) section for response codes in the range from 400 to 599 inclusive. | No | `false` |
+| buffer-response | Affects processing of chunked responses. When set to `false`, each chunk received from the backend is immediately returned to the caller. When set to `true`, chunks are buffered (8 KB, unless end of stream is detected) and only then returned to the caller.<br/><br/>Set to `false` with backends such as those implementing [server-sent events (SSE)](how-to-server-sent-events.md) that require content to be returned or streamed immediately to the caller. Policy expressions aren't allowed. | No | `true` |
+| fail-on-error-status-code | When set to `true`, triggers [on-error](api-management-error-handling-policies.md) section for response codes in the range from 400 to 599 inclusive. Policy expressions aren't allowed. | No | `false` |
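For illustration, a `forward-request` policy combining a few of these attributes might look like this sketch; the timeout value is a placeholder:

```xml
<!-- Sketch: forward to the configured backend, wait up to 60 seconds for response headers,
     and trigger the on-error section for 4xx/5xx responses -->
<forward-request timeout="60" fail-on-error-status-code="true" />
```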
## Usage
api-management Include Fragment Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/include-fragment-policy.md
The policy inserts the policy fragment as-is at the location you select in the p
| Attribute | Description | Required | Default | | | -- | -- | - |
-| fragment-id | A string. Specifies the identifier (name) of a policy fragment created in the API Management instance. | Yes | N/A |
+| fragment-id | A string. Specifies the identifier (name) of a policy fragment created in the API Management instance. Policy expressions aren't allowed. | Yes | N/A |
## Usage
api-management Invoke Dapr Binding Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/invoke-dapr-binding-policy.md
The policy assumes that Dapr runtime is running in a sidecar container in the sa
| Attribute | Description | Required | Default | |||-||
-| name | Target binding name. Must match the name of the bindings [defined](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/bindings_api.md#bindings-structure) in Dapr. | Yes | N/A |
-| operation | Target operation name (binding specific). Maps to the [operation](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/bindings_api.md#invoking-output-bindings) property in Dapr. | No | None |
-| ignore-error | If set to `true` instructs the policy not to trigger ["on-error"](api-management-error-handling-policies.md) section upon receiving error from Dapr runtime. | No | `false` |
-| response-variable-name | Name of the [Variables](api-management-policy-expressions.md#ContextVariables) collection entry to use for storing response from Dapr runtime. | No | None |
-| timeout | Time (in seconds) to wait for Dapr runtime to respond. Can range from 1 to 240 seconds. | No | 5 |
-| template | Templating engine to use for transforming the message content. "Liquid" is the only supported value. | No | None |
+| name | Target binding name. Must match the name of the bindings [defined](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/bindings_api.md#bindings-structure) in Dapr. Policy expressions are allowed. | Yes | N/A |
+| operation | Target operation name (binding specific). Maps to the [operation](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/bindings_api.md#invoking-output-bindings) property in Dapr. Policy expressions aren't allowed. | No | None |
+| ignore-error | If set to `true` instructs the policy not to trigger ["on-error"](api-management-error-handling-policies.md) section upon receiving error from Dapr runtime. Policy expressions aren't allowed. | No | `false` |
+| response-variable-name | Name of the [Variables](api-management-policy-expressions.md#ContextVariables) collection entry to use for storing response from Dapr runtime. Policy expressions aren't allowed. | No | None |
+| timeout | Time (in seconds) to wait for Dapr runtime to respond. Can range from 1 to 240 seconds. Policy expressions are allowed.| No | 5 |
+| template | Templating engine to use for transforming the message content. "Liquid" is the only supported value. | No | None |
| content-type | Type of the message content. "application/json" is the only supported value. | No | None |
+## Elements
+
+| Element | Description | Required |
+||--|-|
+| metadata | Binding specific metadata in the form of key/value pairs. Maps to the [metadata](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/bindings_api.md#invoking-output-bindings) property in Dapr. | No |
+| data | Content of the message. Maps to the [data](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/bindings_api.md#invoking-output-bindings) property in Dapr. Policy expressions are allowed. | No |
+
## Usage

- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error
api-management Ip Filter Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/ip-filter-policy.md
The `ip-filter` policy filters (allows/denies) calls from specific IP addresses
| Attribute | Description | Required | Default | | -- | - | -- | - |
-| address-range from="address" to="address" | A range of IP addresses to allow or deny access for. | Required when the `address-range` element is used. | N/A |
-| action | Specifies whether calls should be allowed (`allow`) or not (`forbid`) for the specified IP addresses and ranges. | Yes | N/A |
+| action | Specifies whether calls should be allowed (`allow`) or not (`forbid`) for the specified IP addresses and ranges. Policy expressions are allowed. | Yes | N/A |
## Elements

| Element | Description | Required |
| -- | -- | -- |
-| address | Add one or more of these elements to specify a single IP address on which to filter. | At least one `address` or `address-range` element is required. |
+| address | Add one or more of these elements to specify a single IP address on which to filter. Policy expressions are allowed. | At least one `address` or `address-range` element is required. |
| address-range | Add one or more of these elements to specify a range of IP addresses `from` "address" `to` "address" on which to filter. | At least one `address` or `address-range` element is required. |
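To tie the attribute and elements together, a sketch of an allow-list configuration follows; the IP address and range are placeholder values:

```xml
<!-- Sketch: allow calls only from a single address and a small address range -->
<ip-filter action="allow">
    <address>13.66.201.169</address>
    <address-range from="13.66.140.128" to="13.66.140.143" />
</ip-filter>
```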
api-management Json To Xml Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/json-to-xml-policy.md
The `json-to-xml` policy converts a request or response body from JSON to XML.
| Attribute | Description | Required | Default | | -- | | -- | - |
-|apply|The attribute must be set to one of the following values.<br /><br /> - `always` - always apply conversion.<br />- `content-type-json` - convert only if response Content-Type header indicates presence of JSON.|Yes|N/A|
-|consider-accept-header|The attribute must be set to one of the following values.<br /><br /> - `true` - apply conversion if XML is requested in request Accept header.<br />- `false` - always apply conversion.|No|`true`|
-|parse-date|When set to `false` date values are simply copied during transformation.|No|`true`|
-|namespace-separator|The character to use as a namespace separator.|No|Underscore|
-|namespace-prefix|The string that identifies property as namespace attribute, usually "xmlns". Properties with names beginning with specified prefix will be added to current element as namespace declarations.|No|N/A|
-|attribute-block-name|When set, properties inside the named object will be added to the element as attributes|No|Not set|
+|apply|The attribute must be set to one of the following values.<br /><br /> - `always` - always apply conversion.<br />- `content-type-json` - convert only if response Content-Type header indicates presence of JSON.<br/><br/>Policy expressions are allowed.|Yes|N/A|
+|consider-accept-header|The attribute must be set to one of the following values.<br /><br /> - `true` - apply conversion if XML is requested in request Accept header.<br />- `false` - always apply conversion.<br/><br/>Policy expressions are allowed.|No|`true`|
+|parse-date|When set to `false` date values are simply copied during transformation. Policy expressions aren't allowed.|No|`true`|
+|namespace-separator|The character to use as a namespace separator. Policy expressions are allowed.|No|Underscore|
+|namespace-prefix|The string that identifies property as namespace attribute, usually "xmlns". Properties with names beginning with specified prefix will be added to current element as namespace declarations. Policy expressions are allowed.|No|N/A|
+|attribute-block-name|When set, properties inside the named object will be added to the element as attributes. Policy expressions are allowed.|No|Not set|
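As a sketch of how these attributes combine (typically in the outbound section), a policy that always converts the body to XML and copies date values unchanged might look like this:

```xml
<!-- Sketch: always convert JSON to XML, regardless of the Accept header, without parsing dates -->
<json-to-xml apply="always" consider-accept-header="false" parse-date="false" />
```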
## Usage
api-management Jsonp Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/jsonp-policy.md
The `jsonp` policy adds JSON with padding (JSONP) support to an operation or an
|Name|Description|Required|Default| |-|--|--|-|
-|callback-parameter-name|The cross-domain JavaScript function call prefixed with the fully qualified domain name where the function resides.|Yes|N/A|
+|callback-parameter-name|The cross-domain JavaScript function call prefixed with the fully qualified domain name where the function resides. Policy expressions are allowed.|Yes|N/A|
## Usage
api-management Limit Concurrency Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/limit-concurrency-policy.md
The `limit-concurrency` policy prevents enclosed policies from executing by more
| Attribute | Description | Required | Default | | | -- | -- | - |
-| key | A string. Policy expression allowed. Specifies the concurrency scope. Can be shared by multiple policies. | Yes | N/A |
-| max-count | An integer. Specifies a maximum number of requests that are allowed to enter the policy. | Yes | N/A |
+| key | A string. Specifies the concurrency scope. Can be shared by multiple policies. Policy expressions are allowed. | Yes | N/A |
+| max-count | An integer. Specifies a maximum number of requests that are allowed to enter the policy. Policy expressions aren't allowed. | Yes | N/A |
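For illustration, a sketch of the policy wrapping a backend call follows; the key expression and the count of 10 are placeholder choices:

```xml
<!-- Sketch: let at most 10 requests at a time execute the enclosed forwarding step,
     scoped per backend host -->
<limit-concurrency key="@(context.Request.Url.Host)" max-count="10">
    <forward-request timeout="120" />
</limit-concurrency>
```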
## Usage
api-management Log To Eventhub Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/log-to-eventhub-policy.md
The `log-to-eventhub` policy sends messages in the specified format to an event
| Attribute | Description | Required | Default | | - | - | -- | -|
-| logger-id | The ID of the Logger registered with your API Management service. | Yes | N/A |
-| partition-id | Specifies the index of the partition where messages are sent. | Optional. Do not use if `partition-key` is used. | N/A |
-| partition-key | Specifies the value used for partition assignment when messages are sent. | Optional. Do not use if `partition-id` is used. | N/A |
+| logger-id | The ID of the Logger registered with your API Management service. Policy expressions aren't allowed. | Yes | N/A |
+| partition-id | Specifies the index of the partition where messages are sent. Policy expressions aren't allowed. | Optional. Do not use if `partition-key` is used. | N/A |
+| partition-key | Specifies the value used for partition assignment when messages are sent. Policy expressions are allowed. | Optional. Do not use if `partition-id` is used. | N/A |
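A minimal sketch follows; the logger ID is a placeholder for a logger you have registered, and the message body is a simple comma-separated expression:

```xml
<!-- Sketch: send a small CSV-style message to a registered event hub logger,
     partitioned by API name -->
<log-to-eventhub logger-id="apim-eventhub-logger" partition-key="@(context.Api.Name)">
    @( string.Join(",", DateTime.UtcNow, context.Deployment.ServiceName, context.RequestId, context.Request.IpAddress) )
</log-to-eventhub>
```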
## Usage
api-management Mock Response Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mock-response-policy.md
# Mock response
-The `mock-response` policy, as the name implies, is used to mock APIs and operations. It cancels normal pipeline execution and returns a mocked response to the caller. The policy always tries to return responses of highest fidelity. It prefers response content examples, when available. It generates sample responses from schemas, when schemas are provided and examples are not. If neither examples or schemas are found, responses with no content are returned.
+The `mock-response` policy, as the name implies, is used to mock APIs and operations. It cancels normal pipeline execution and returns a mocked response to the caller. The policy always tries to return responses of highest fidelity. It prefers response content examples, when available. It generates sample responses from schemas, when schemas are provided and examples aren't. If neither examples nor schemas are found, responses with no content are returned.
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The `mock-response` policy, as the name implies, is used to mock APIs and operat
| Attribute | Description | Required | Default | | | -- | -- | - |
-| status-code | Specifies response status code and is used to select corresponding example or schema. | No | 200 |
-| content-type | Specifies `Content-Type` response header value and is used to select corresponding example or schema. | No | None |
+| status-code | Specifies response status code and is used to select corresponding example or schema. Policy expressions aren't allowed. | No | 200 |
+| content-type | Specifies `Content-Type` response header value and is used to select corresponding example or schema. Policy expressions aren't allowed. | No | None |
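A minimal sketch of the policy using both attributes follows:

```xml
<!-- Sketch: short-circuit the pipeline and return the JSON example or schema-based sample
     defined for the 200 response of this operation -->
<mock-response status-code="200" content-type="application/json" />
```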
## Usage
api-management Proxy Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/proxy-policy.md
The `proxy` policy allows you to route requests forwarded to backends via an HTT
| Attribute | Description | Required | Default | | -- | | -- | - |
-| url | Proxy URL in the form of `http://host:port`. | Yes | N/A |
-| username | Username to be used for authentication with the proxy. | No | N/A |
-| password | Password to be used for authentication with the proxy. | No | N/A |
+| url | Proxy URL in the form of `http://host:port`. Policy expressions are allowed. | Yes | N/A |
+| username | Username to be used for authentication with the proxy. Policy expressions are allowed. | No | N/A |
+| password | Password to be used for authentication with the proxy. Policy expressions are allowed. | No | N/A |
## Usage
api-management Publish To Dapr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/publish-to-dapr-policy.md
The policy assumes that Dapr runtime is running in a sidecar container in the sa
| Attribute | Description | Required | Default | |||-||
-| pubsub-name | The name of the target PubSub component. Maps to the [pubsubname](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/pubsub_api.md) parameter in Dapr. If not present, the `topic` attribute value must be in the form of `pubsub-name/topic-name`. | No | None |
-| topic | The name of the topic. Maps to the [topic](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/pubsub_api.md) parameter in Dapr. | Yes | N/A |
-| ignore-error | If set to `true`, instructs the policy not to trigger ["on-error"](api-management-error-handling-policies.md) section upon receiving error from Dapr runtime. | No | `false` |
-| response-variable-name | Name of the [Variables](api-management-policy-expressions.md#ContextVariables) collection entry to use for storing response from Dapr runtime. | No | None |
-| timeout | Time (in seconds) to wait for Dapr runtime to respond. Can range from 1 to 240 seconds. | No | 5 |
+| pubsub-name | The name of the target PubSub component. Maps to the [pubsubname](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/pubsub_api.md) parameter in Dapr. If not present, the `topic` attribute value must be in the form of `pubsub-name/topic-name`. Policy expressions are allowed. | No | None |
+| topic | The name of the topic. Maps to the [topic](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/pubsub_api.md) parameter in Dapr. Policy expressions are allowed. | Yes | N/A |
+| ignore-error | If set to `true`, instructs the policy not to trigger ["on-error"](api-management-error-handling-policies.md) section upon receiving error from Dapr runtime. Policy expressions aren't allowed. | No | `false` |
+| response-variable-name | Name of the [Variables](api-management-policy-expressions.md#ContextVariables) collection entry to use for storing response from Dapr runtime. Policy expressions aren't allowed. | No | None |
+| timeout | Time (in seconds) to wait for Dapr runtime to respond. Can range from 1 to 240 seconds. Policy expressions are allowed. | No | 5 |
| template | Templating engine to use for transforming the message content. "Liquid" is the only supported value. | No | None |
| content-type | Type of the message content. "application/json" is the only supported value. | No | None |
api-management Quota By Key Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quota-by-key-policy.md
To understand the difference between rate limits and quotas, [see Rate limits an
| Attribute | Description | Required | Default | | - | | - | - |
-| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| counter-key | The key to use for the `quota policy`. For each key value, a single counter is used for all scopes at which the policy is configured. | Yes | N/A |
-| increment-condition | The Boolean expression specifying if the request should be counted towards the quota (`true`) | No | N/A |
-| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to `first-period-start`. When `renewal-period` is set to `0`, the period is set to infinite. | Yes | N/A |
-| first-period-start | The starting date and time for quota renewal periods, in the following format: `yyyy-MM-ddTHH:mm:ssZ` as specified by the ISO 8601 standard. | No | `0001-01-01T00:00:00Z` |
+| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. Policy expressions aren't allowed.| Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| counter-key | The key to use for the `quota policy`. For each key value, a single counter is used for all scopes at which the policy is configured. Policy expressions are allowed. | Yes | N/A |
+| increment-condition | The Boolean expression specifying if the request should be counted towards the quota (`true`). Policy expressions are allowed. | No | N/A |
+| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to `first-period-start`. When `renewal-period` is set to `0`, the period is set to infinite. Policy expressions aren't allowed. | Yes | N/A |
+| first-period-start | The starting date and time for quota renewal periods, in the following format: `yyyy-MM-ddTHH:mm:ssZ` as specified by the ISO 8601 standard. Policy expressions aren't allowed. | No | `0001-01-01T00:00:00Z` |
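For illustration, a sketch combining these attributes follows; the limits and the per-IP counter key are placeholder choices:

```xml
<!-- Sketch: allow 10,000 calls and 40,000 KB per hour per client IP address,
     counting only 200 responses toward the quota -->
<quota-by-key calls="10000" bandwidth="40000" renewal-period="3600"
              counter-key="@(context.Request.IpAddress)"
              increment-condition="@(context.Response.StatusCode == 200)" />
```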
## Usage
api-management Quota Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quota-policy.md
To understand the difference between rate limits and quotas, [see Rate limits an
| Attribute | Description | Required | Default | | -- | | - | - |
-| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to the start time of the subscription. When `renewal-period` is set to `0`, the period is set to infinite.| Yes | N/A |
+| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to the start time of the subscription. When `renewal-period` is set to `0`, the period is set to infinite. Policy expressions aren't allowed.| Yes | N/A |
## Elements
To understand the difference between rate limits and quotas, [see Rate limits an
| Attribute | Description | Required | Default |
| -- | -- | - | - |
| name | The name of the API for which to apply the call quota limit. | Either `name` or `id` must be specified. | N/A |
-| id | The ID of the API for which to apply the call quota. | Either `name` or `id` must be specified. | N/A |
-| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to the start time of the subscription. When `renewal-period` is set to `0`, the period is set to infinite.| Yes | N/A |
+| id | The ID of the API for which to apply the call quota limit. | Either `name` or `id` must be specified. | N/A |
+| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to the start time of the subscription. When `renewal-period` is set to `0`, the period is set to infinite. Policy expressions aren't allowed.| Yes | N/A |
## operation attributes

| Attribute | Description | Required | Default |
| -- | -- | - | - |
-| name | The name of the operation for which to apply the rate limit. | Either `name` or `id` must be specified. | N/A |
-| id | The ID of the operation for which to apply the rate limit. | Either `name` or `id` must be specified. | N/A |
-| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to the start time of the subscription. When `renewal-period` is set to `0`, the period is set to infinite.| Yes | N/A |
+| name | The name of the operation for which to apply the call quota limit. | Either `name` or `id` must be specified. | N/A |
+| id | The ID of the operation for which to apply the call quota limit. | Either `name` or `id` must be specified. | N/A |
+| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to the start time of the subscription. When `renewal-period` is set to `0`, the period is set to infinite. Policy expressions aren't allowed.| Yes | N/A |
## Usage
To understand the difference between rate limits and quotas, [see Rate limits an
### Usage notes

* This policy can be used only once per policy definition.
-* [Policy expressions](api-management-policy-expressions.md) can't be used in attribute values for this policy.
* This policy is only applied when an API is accessed using a subscription key.
api-management Rate Limit By Key Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rate-limit-by-key-policy.md
To understand the difference between rate limits and quotas, [see Rate limits an
| Attribute | Description | Required | Default | | - | -- | -- | - |
-| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. Policy expression is allowed. | Yes | N/A |
-| counter-key | The key to use for the rate limit policy. For each key value, a single counter is used for all scopes at which the policy is configured. | Yes | N/A |
-| increment-condition | The Boolean expression specifying if the request should be counted towards the rate (`true`). | No | N/A |
-| increment-count | The number by which the counter is increased per request. | No | 1 |
-| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Policy expression is allowed. Maximum allowed value: 300 seconds. | Yes | N/A |
-| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | `Retry-After` |
-| retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A |
-| remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
-| remaining-calls-variable-name | The name of a policy expression variable that after each policy execution stores the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
-| total-calls-header-name | The name of a response header whose value is the value specified in `calls`. | No | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. Policy expressions are allowed. | Yes | N/A |
+| counter-key | The key to use for the rate limit policy. For each key value, a single counter is used for all scopes at which the policy is configured. Policy expressions are allowed. | Yes | N/A |
+| increment-condition | The Boolean expression specifying if the request should be counted towards the rate (`true`). Policy expressions are allowed. | No | N/A |
+| increment-count | The number by which the counter is increased per request. Policy expressions are allowed. | No | 1 |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. Policy expressions are allowed. | Yes | N/A |
+| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. Policy expressions aren't allowed. | No | `Retry-After` |
+| retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. Policy expressions aren't allowed. | No | N/A |
+| remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | No | N/A |
+| remaining-calls-variable-name | The name of a policy expression variable that after each policy execution stores the number of remaining calls allowed for the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | No | N/A |
+| total-calls-header-name | The name of a response header whose value is the value specified in `calls`. Policy expressions aren't allowed. | No | N/A |
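As a sketch under these rules, a per-client-IP rate limit might look like the following; the call count and window length are placeholder values:

```xml
<!-- Sketch: allow 10 calls per 60-second sliding window per client IP address -->
<rate-limit-by-key calls="10" renewal-period="60"
                   counter-key="@(context.Request.IpAddress)" />
```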
## Usage
api-management Rate Limit Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rate-limit-policy.md
To understand the difference between rate limits and quotas, [see Rate limits an
| Attribute | Description | Required | Default | | -- | -- | -- | - |
-| calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. | Yes | N/A |
-| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. | Yes | N/A |
-| total-calls-header-name | The name of a response header whose value is the value specified in `calls`. | No | N/A |
-| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | `Retry-After` |
-| retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A |
-| remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
-| remaining-calls-variable-name | The name of a policy expression variable that after each policy execution stores the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
-| total-calls-header-name | The name of a response header whose value is the value specified in `calls`. | No | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. Policy expressions aren't allowed.| Yes | N/A |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. Policy expressions aren't allowed. | Yes | N/A |
+| total-calls-header-name | The name of a response header whose value is the value specified in `calls`. Policy expressions aren't allowed. | No | N/A |
+| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. Policy expressions aren't allowed. | No | `Retry-After` |
+| retry-after-variable-name | The name of a variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. Policy expressions aren't allowed. | No | N/A |
+| remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. Policy expressions aren't allowed.| No | N/A |
+| remaining-calls-variable-name | The name of a variable that after each policy execution stores the number of remaining calls allowed for the time interval specified in the `renewal-period`. Policy expressions aren't allowed.| No | N/A |
+| total-calls-header-name | The name of a response header whose value is the value specified in `calls`. Policy expressions aren't allowed.| No | N/A |
## Elements
To understand the difference between rate limits and quotas, [see Rate limits an
| -- | -- | -- | - |
| name | The name of the API for which to apply the rate limit. | Either `name` or `id` must be specified. | N/A |
| id | The ID of the API for which to apply the rate limit. | Either `name` or `id` must be specified. | N/A |
-| calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. | Yes | N/A |
-| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. | Yes | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. Policy expressions aren't allowed.| Yes | N/A |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. Policy expressions aren't allowed. | Yes | N/A |
### operation attributes
To understand the difference between rate limits and quotas, [see Rate limits an
| -- | -- | -- | - |
| name | The name of the operation for which to apply the rate limit. | Either `name` or `id` must be specified. | N/A |
| id | The ID of the operation for which to apply the rate limit. | Either `name` or `id` must be specified. | N/A |
-| calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. | Yes | N/A |
-| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. | Yes | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. Policy expressions aren't allowed.| Yes | N/A |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. Policy expressions aren't allowed. | Yes | N/A |
## Usage
To understand the difference between rate limits and quotas, [see Rate limits an
### Usage notes

* This policy can be used only once per policy definition.
-* Except where noted, [policy expressions](api-management-policy-expressions.md) can't be used in attribute values for this policy.
* This policy is only applied when an API is accessed using a subscription key.
* [!INCLUDE [api-management-self-hosted-gateway-rate-limit](../../includes/api-management-self-hosted-gateway-rate-limit.md)] [Learn more](how-to-self-hosted-gateway-on-kubernetes-in-production.md#request-throttling)
api-management Retry Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/retry-policy.md
The `retry` policy executes its child policies once and then retries their execu
| Attribute | Description | Required | Default | | - | -- | -- | - |
-| condition | A Boolean literal or [expression](api-management-policy-expressions.md) specifying if retries should be stopped (`false`) or continued (`true`). | Yes | N/A |
-| count | A positive number specifying the maximum number of retries to attempt. | Yes | N/A |
-| interval | A positive number in seconds specifying the wait interval between the retry attempts. | Yes | N/A |
-| max-interval | A positive number in seconds specifying the maximum wait interval between the retry attempts. It is used to implement an exponential retry algorithm. | No | N/A |
-| delta | A positive number in seconds specifying the wait interval increment. It is used to implement the linear and exponential retry algorithms. | No | N/A |
-| first-fast-retry | If set to `true` , the first retry attempt is performed immediately. | No | `false` |
+| condition | Boolean. Specifies whether retries should be stopped (`false`) or continued (`true`). Policy expressions are allowed. | Yes | N/A |
+| count | A positive number specifying the maximum number of retries to attempt. Policy expressions are allowed. | Yes | N/A |
+| interval | A positive number in seconds specifying the wait interval between the retry attempts. Policy expressions are allowed. | Yes | N/A |
+| max-interval | A positive number in seconds specifying the maximum wait interval between the retry attempts. It is used to implement an exponential retry algorithm. Policy expressions are allowed. | No | N/A |
+| delta | A positive number in seconds specifying the wait interval increment. It is used to implement the linear and exponential retry algorithms. Policy expressions are allowed. | No | N/A |
+| first-fast-retry | Boolean. If set to `true`, the first retry attempt is performed immediately. Policy expressions are allowed. | No | `false` |
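For illustration, a sketch of a `retry` policy wrapping a backend call follows; the status-code condition, count, and interval are placeholder choices:

```xml
<!-- Sketch: retry the forward-request up to 3 times, 5 seconds apart,
     while the backend keeps returning a 5xx status -->
<retry condition="@(context.Response.StatusCode >= 500)" count="3" interval="5" first-fast-retry="true">
    <forward-request buffer-request-body="true" />
</retry>
```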
## Retry wait times
api-management Return Response Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/return-response-policy.md
The `return-response` policy cancels pipeline execution and returns either a def
| Attribute | Description | Required | Default | | - | | | - |
-| response-variable-name | The name of the context variable referenced from, for example, an upstream [send-request](send-request-policy.md) policy and containing a `Response` object. | No | N/A |
+| response-variable-name | The name of the context variable referenced from, for example, an upstream [send-request](send-request-policy.md) policy and containing a `Response` object. Policy expressions aren't allowed. | No | N/A |
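A minimal sketch of the policy returning a default-style response follows; the status code and challenge header are placeholder choices:

```xml
<!-- Sketch: stop processing and return a 401 with a WWW-Authenticate challenge -->
<return-response>
    <set-status code="401" reason="Unauthorized" />
    <set-header name="WWW-Authenticate" exists-action="override">
        <value>Bearer error="invalid_token"</value>
    </set-header>
</return-response>
```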
## Elements
api-management Rewrite Uri Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rewrite-uri-policy.md
Previously updated : 12/08/2022
Last updated : 03/28/2023
The `rewrite-uri` policy converts a request URL from its public form to the form
- Request URL - `http://api.example.com/v2/US/hardware/storenumber&ordernumber?City&State`
-This policy can be used when a human and/or browser-friendly URL should be transformed into the URL format expected by the web service. This policy only needs to be applied when exposing an alternative URL format, such as clean URLs, RESTful URLs, user-friendly URLs or SEO-friendly URLs that are purely structural URLs that do not contain a query string and instead contain only the path of the resource (after the scheme and the authority). This is often done for aesthetic, usability, or search engine optimization (SEO) purposes.
+This policy can be used when a human and/or browser-friendly URL should be transformed into the URL format expected by the web service. This policy only needs to be applied when exposing an alternative URL format, such as clean URLs, RESTful URLs, user-friendly URLs or SEO-friendly URLs that are purely structural URLs that don't contain a query string and instead contain only the path of the resource (after the scheme and the authority). This is often done for aesthetic, usability, or search engine optimization (SEO) purposes.
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
This policy can be used when a human and/or browser-friendly URL should be trans
|Name|Description|Required|Default| |-|--|--|-|
-|template|The actual web service URL with any query string parameters. When using expressions, the whole value must be an expression.|Yes|N/A|
-|copy-unmatched-params|Specifies whether query parameters in the incoming request not present in the original URL template are added to the URL defined by the rewrite template.|No|`true`|
+|template|The actual web service URL with any query string parameters. Policy expressions are allowed. When expressions are used, the whole value must be an expression. |Yes|N/A|
+|copy-unmatched-params|Specifies whether query parameters in the incoming request not present in the original URL template are added to the URL defined by the rewrite template. Policy expressions are allowed.|No|`true`|
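For illustration, a sketch that assumes the operation's URL template declares an `ordernumber` path parameter (the paths and names are placeholders):
```xml
<!-- Rewrite the friendly path into the backend's query-string form -->
<rewrite-uri template="/legacy/orders?ordernumber={ordernumber}" copy-unmatched-params="false" />
```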
## Usage
This policy can be used when a human and/or browser-friendly URL should be trans
### Usage notes
-You can only add query string parameters using the policy. You cannot add extra template path parameters in the rewrite URL.
+You can only add query string parameters using the policy. You can't add extra template path parameters in the rewrite URL.
## Example
api-management Send One Way Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-one-way-request-policy.md
The `send-one-way-request` policy sends the provided request to the specified UR
| Attribute | Description | Required | Default | | - | -- | -- | -- |
-| mode | Determines whether this is a `new` request or a `copy` of the current request. In outbound mode, `mode=copy` does not initialize the request body. | No | `new` |
-| timeout| The timeout interval in seconds before the call to the URL fails. | No | 60 |
+| mode | Determines whether this is a `new` request or a `copy` of the current request. In outbound mode, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` |
+| timeout| The timeout interval in seconds before the call to the URL fails. Policy expressions are allowed. | No | 60 |
## Elements | Element | Description | Required | | -- | -- | - |
-| set-url | The URL of the request. | No if `mode=copy`; otherwise yes. |
-| set-method | A [set-method](set-method-policy.md) policy statement. | No if `mode=copy`; otherwise yes. |
-| set-header | A [set-header](set-header-policy.md) policy statement. Use multiple `set-header` elements for multiple request headers. | No |
-| set-body | A [set-body](set-body-policy.md) policy statement. | No |
+| set-url | The URL of the request. Policy expressions are allowed. | No if `mode=copy`; otherwise yes. |
+| [set-method](set-method-policy.md) | Sets the method of the request. Policy expressions aren't allowed. | No if `mode=copy`; otherwise yes. |
+| [set-header](set-header-policy.md) | Sets a header in the request. Use multiple `set-header` elements for multiple request headers. | No |
+| [set-body](set-body-policy.md) | Sets the body of the request. | No |
| authentication-certificate | [Certificate to use for client authentication](authentication-certificate-policy.md), specified in a `thumbprint` attribute. | No |
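A minimal fire-and-forget sketch, assuming it runs in a section where `context.Response` is available; the webhook URL and payload shape are placeholders:
```xml
<send-one-way-request mode="new">
    <set-url>https://hooks.example.com/alerts</set-url>
    <set-method>POST</set-method>
    <set-header name="Content-Type" exists-action="override">
        <value>application/json</value>
    </set-header>
    <!-- Build a small JSON payload from the current response status -->
    <set-body>@(new JObject(new JProperty("statusCode", context.Response.StatusCode)).ToString())</set-body>
</send-one-way-request>
```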
api-management Send Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-request-policy.md
The `send-request` policy sends the provided request to the specified URL, waiti
| Attribute | Description | Required | Default | | - | -- | -- | -- |
-| mode | Determines whether this is a `new` request or a `copy` of the current request. In outbound mode, `mode=copy` does not initialize the request body. | No | `new` |
-| response-variable-name | The name of context variable that will receive a response object. If the variable doesn't exist, it will be created upon successful execution of the policy and will become accessible via [`context.Variable`](api-management-policy-expressions.md#ContextVariables) collection. | Yes | N/A |
-| timeout | The timeout interval in seconds before the call to the URL fails. | No | 60 |
-| ignore-error | If `true` and the request results in an error, the error will be ignored, and the response variable will contain a null value. | No | `false` |
+| mode | Determines whether this is a `new` request or a `copy` of the current request. In outbound mode, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` |
+| response-variable-name | The name of the context variable that will receive a response object. If the variable doesn't exist, it will be created upon successful execution of the policy and will become accessible via the [`context.Variable`](api-management-policy-expressions.md#ContextVariables) collection. Policy expressions are allowed. | Yes | N/A |
+| timeout | The timeout interval in seconds before the call to the URL fails. Policy expressions are allowed. | No | 60 |
+| ignore-error | If `true` and the request results in an error, the error will be ignored, and the response variable will contain a null value. Policy expressions aren't allowed. | No | `false` |
## Elements | Element | Description | Required | | -- | -- | - |
-| set-url | The URL of the request. | No if `mode=copy`; otherwise yes. |
-| set-method | A [set-method](set-method-policy.md) policy statement. | No if `mode=copy`; otherwise yes. |
-| set-header | A [set-header](set-header-policy.md) policy statement. Use multiple `set-header` elements for multiple request headers. | No |
-| set-body | A [set-body](set-body-policy.md) policy statement. | No |
+| set-url | The URL of the request. Policy expressions are allowed. | No if `mode=copy`; otherwise yes. |
+| [set-method](set-method-policy.md) | Sets the method of the request. Policy expressions aren't allowed. | No if `mode=copy`; otherwise yes. |
+| [set-header](set-header-policy.md) | Sets a header in the request. Use multiple `set-header` elements for multiple request headers. | No |
+| [set-body](set-body-policy.md) | Sets the body of the request. | No |
| authentication-certificate | [Certificate to use for client authentication](authentication-certificate-policy.md), specified in a `thumbprint` attribute. | No | | proxy | A [proxy](proxy-policy.md) policy statement. Used to route request via HTTP proxy | No |
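A sketch showing the attributes together; the token endpoint URL, body, and variable name are placeholders, not values from the article:
```xml
<send-request mode="new" response-variable-name="tokenResponse" timeout="20" ignore-error="true">
    <set-url>https://login.example.com/oauth2/token</set-url>
    <set-method>POST</set-method>
    <set-header name="Content-Type" exists-action="override">
        <value>application/x-www-form-urlencoded</value>
    </set-header>
    <set-body>grant_type=client_credentials&amp;scope=api://contoso</set-body>
</send-request>
```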
api-management Set Backend Service Dapr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-backend-service-dapr-policy.md
The policy assumes that Dapr runs in a sidecar container in the same pod as the
| Attribute | Description | Required | Default | |||-|| | backend-id | Must be set to "dapr". | Yes | N/A |
-| dapr-app-id | Name of the target microservice. Used to form the [appId](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/service_invocation_api.md) parameter in Dapr.| Yes | N/A |
-| dapr-method | Name of the method or a URL to invoke on the target microservice. Maps to the [method-name](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/service_invocation_api.md) parameter in Dapr.| Yes | N/A |
-| dapr-namespace | Name of the namespace the target microservice is residing in. Used to form the [appId](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/service_invocation_api.md) parameter in Dapr.| No | N/A |
+| dapr-app-id | Name of the target microservice. Used to form the [appId](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/service_invocation_api.md) parameter in Dapr. Policy expressions are allowed. | Yes | N/A |
+| dapr-method | Name of the method or a URL to invoke on the target microservice. Maps to the [method-name](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/service_invocation_api.md) parameter in Dapr. Policy expressions are allowed. | Yes | N/A |
+| dapr-namespace | Name of the namespace the target microservice is residing in. Used to form the [appId](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/service_invocation_api.md) parameter in Dapr. Policy expressions are allowed. | No | N/A |
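For example, a minimal sketch with placeholder microservice, method, and namespace names:
```xml
<!-- Route the request to the "orders" Dapr app in the "production" namespace -->
<set-backend-service backend-id="dapr" dapr-app-id="orders" dapr-method="neworder" dapr-namespace="production" />
```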
## Usage
api-management Set Backend Service Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-backend-service-policy.md
Use the `set-backend-service` policy to redirect an incoming request to a differ
| Attribute | Description | Required | Default | | -- | | -- | - |
-|base-url|New backend service base URL.|One of `base-url` or `backend-id` must be present.|N/A|
-|backend-id|Identifier (name) of the backend to route primary or secondary replica of a partition. |One of `base-url` or `backend-id` must be present.|N/A|
-|sf-resolve-condition|Only applicable when the backend is a Service Fabric service. Condition identifying if the call to Service Fabric backend has to be repeated with new resolution.|No|N/A|
-|sf-service-instance-name|Only applicable when the backend is a Service Fabric service. Allows changing service instances at runtime. |No|N/A|
-|sf-listener-name|Only applicable when the backend is a Service Fabric service and is specified using `backend-id`. Service Fabric Reliable Services allows you to create multiple listeners in a service. This attribute is used to select a specific listener when a backend Reliable Service has more than one listener. If this attribute isn't specified, API Management will attempt to use a listener without a name. A listener without a name is typical for Reliable Services that have only one listener. |No|N/A|
+|base-url|New backend service base URL. Policy expressions are allowed.|One of `base-url` or `backend-id` must be present.|N/A|
+|backend-id|Identifier (name) of the backend to route primary or secondary replica of a partition. Policy expressions are allowed. |One of `base-url` or `backend-id` must be present.|N/A|
+|sf-resolve-condition|Only applicable when the backend is a Service Fabric service. Condition identifying if the call to Service Fabric backend has to be repeated with new resolution. Policy expressions are allowed.|No|N/A|
+|sf-service-instance-name|Only applicable when the backend is a Service Fabric service. Allows changing service instances at runtime. Policy expressions are allowed. |No|N/A|
+|sf-partition-key|Only applicable when the backend is a Service Fabric service. Specifies the partition key of a Service Fabric service. Policy expressions are allowed. |No|N/A|
+|sf-listener-name|Only applicable when the backend is a Service Fabric service and is specified using `backend-id`. Service Fabric Reliable Services allows you to create multiple listeners in a service. This attribute is used to select a specific listener when a backend Reliable Service has more than one listener. If this attribute isn't specified, API Management will attempt to use a listener without a name. A listener without a name is typical for Reliable Services that have only one listener. Policy expressions are allowed.|No|N/A|
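A minimal sketch using the `base-url` attribute (the URL is a placeholder):
```xml
<set-backend-service base-url="https://internal.contoso.example/api" />
```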
## Usage
api-management Set Body Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-body-policy.md
Use the `set-body` policy to set the message body for incoming and outgoing requ
| Attribute | Description | Required | Default | | -- | | -- | - |
-|template|Used to change the templating mode that the `set-body` policy will run in. Currently the only supported value is:<br /><br />- `liquid` - the `set-body` policy will use the liquid templating engine |No| N/A|
-|xsi-nil| Used to control how elements marked with `xsi:nil="true"` are represented in XML payloads. Set to one of the following values:<br /><br />- `blank` - `nil` is represented with an empty string.<br />- `null` - `nil` is represented with a null value.|No | `blank` |
+|template|Used to change the templating mode that the `set-body` policy runs in. Currently the only supported value is:<br /><br />- `liquid` - the `set-body` policy will use the liquid templating engine |No| N/A|
+|xsi-nil| Used to control how elements marked with `xsi:nil="true"` are represented in XML payloads. Set to one of the following values:<br /><br />- `blank` - `nil` is represented with an empty string.<br />- `null` - `nil` is represented with a null value.<br/></br>Policy expressions aren't allowed. |No | `blank` |
For accessing information about the request and response, the Liquid template can bind to a context object with the following properties: <br /> <pre>context.
OriginalUrl.
### Usage notes
+ - If you're using the `set-body` policy to return a new or updated body, you don't need to set `preserveContent` to `true` because you're explicitly supplying the new body contents.
+ - Preserving the content of a response in the inbound pipeline doesn't make sense because there's no response yet.
- Preserving the content of a request in the outbound pipeline doesn't make sense because the request has already been sent to the backend at this point.
+ - If this policy is used when there's no message body, for example in an inbound `GET`, an exception is thrown.
For more information, see the `context.Request.Body`, `context.Response.Body`, and the `IMessageBody` sections in the [Context variable](api-management-policy-expressions.md#ContextVariables) table.
The following Liquid filters are supported in the `set-body` policy. For filter
### Accessing the body as a string
-We are preserving the original request body so that we can access it later in the pipeline.
+We're preserving the original request body so that we can access it later in the pipeline.
```xml <set-body>
We are preserving the original request body so that we can access it later in th
### Accessing the body as a JObject
-Since we are not reserving the original request body, accessing it later in the pipeline will result in an exception.
+Since we're not preserving the original request body, accessing it later in the pipeline will result in an exception.
```xml <set-body> 
This example shows how to perform content filtering by removing data elements fr
``` ### Access the body as URL-encoded form data
-The following example uses the `AsFormUrlEncodedContent()` expression to access the request body as URL-encoded form data (content type `application/x-www-form-urlencoded`), and then converts it to JSON. Since we are not reserving the original request body, accessing it later in the pipeline will result in an exception.
+The following example uses the `AsFormUrlEncodedContent()` expression to access the request body as URL-encoded form data (content type `application/x-www-form-urlencoded`), and then converts it to JSON. Since we're not preserving the original request body, accessing it later in the pipeline will result in an exception.
```xml <set-body> 
The following example uses the `AsFormUrlEncodedContent()` expression to access
``` ### Access and return body as URL-encoded form data
-The following example uses the `AsFormUrlEncodedContent()` expression to access the request body as URL-encoded form data (content type `application/x-www-form-urlencoded`), adds data to the payload, and returns URL-encoded form data. Since we are not reserving the original request body, accessing it later in the pipeline will result in an exception.
+The following example uses the `AsFormUrlEncodedContent()` expression to access the request body as URL-encoded form data (content type `application/x-www-form-urlencoded`), adds data to the payload, and returns URL-encoded form data. Since we're not preserving the original request body, accessing it later in the pipeline will result in an exception.
```xml <set-body> 
api-management Set Header Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-header-policy.md
The `set-header` policy assigns a value to an existing HTTP response and/or requ
|Name|Description|Required|Default| |-|--|--|-|
-|exists-action|Specifies action to take when the header is already specified. This attribute must have one of the following values.<br /><br /> - `override` - replaces the value of the existing header.<br />- `skip` - does not replace the existing header value.<br />- `append` - appends the value to the existing header value.<br />- `delete` - removes the header from the request.<br /><br /> When set to `override`, enlisting multiple entries with the same name results in the header being set according to all entries (which will be listed multiple times); only listed values will be set in the result.|No|`override`|
-|name|Specifies name of the header to be set.|Yes|N/A|
+|exists-action|Specifies action to take when the header is already specified. This attribute must have one of the following values.<br /><br /> - `override` - replaces the value of the existing header.<br />- `skip` - does not replace the existing header value.<br />- `append` - appends the value to the existing header value.<br />- `delete` - removes the header from the request.<br /><br /> When set to `override`, enlisting multiple entries with the same name results in the header being set according to all entries (which will be listed multiple times); only listed values will be set in the result. <br/><br/>Policy expressions are allowed.|No|`override`|
+|name|Specifies name of the header to be set. Policy expressions are allowed.|Yes|N/A|
## Elements |Name|Description|Required| |-|--|--|
-|value|Specifies the value of the header to be set. For multiple headers with the same name, add additional `value` elements.|No|
+|value|Specifies the value of the header to be set. Policy expressions are allowed. For multiple headers with the same name, add additional `value` elements.|No|
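As a sketch, setting a header value from a policy expression (the header name is a placeholder):
```xml
<set-header name="X-Request-Id" exists-action="skip">
    <!-- Use the gateway's request identifier as the header value -->
    <value>@(context.RequestId.ToString())</value>
</set-header>
```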
## Usage
api-management Set Method Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-method-policy.md
The `set-method` policy allows you to change the HTTP request method for a reque
<set-method>HTTP method</set-method> ```
-The value of the element specifies the HTTP method, such as `POST`, `GET`, and so on.
+The value of the element specifies the HTTP method, such as `POST`, `GET`, and so on. Policy expressions are allowed.
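A sketch of an expression-based value, mapping inbound `HEAD` requests to `GET` (illustrative only):
```xml
<set-method>@(context.Request.Method == "HEAD" ? "GET" : context.Request.Method)</set-method>
```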
## Usage
api-management Set Query Parameter Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-query-parameter-policy.md
The `set-query-parameter` policy adds, replaces value of, or deletes request que
|Name|Description|Required|Default| |-|--|--|-|
-|exists-action|Specifies what action to take when the query parameter is already specified. This attribute must have one of the following values.<br /><br /> - `override` - replaces the value of the existing parameter.<br />- `skip` - does not replace the existing query parameter value.<br />- `append` - appends the value to the existing query parameter value.<br />- `delete` - removes the query parameter from the request.<br /><br /> When set to `override` enlisting multiple entries with the same name results in the query parameter being set according to all entries (which will be listed multiple times); only listed values will be set in the result.|No|`override`|
-|name|Specifies name of the query parameter to be set.|Yes|N/A|
+|exists-action|Specifies what action to take when the query parameter is already specified. This attribute must have one of the following values.<br /><br /> - `override` - replaces the value of the existing parameter.<br />- `skip` - does not replace the existing query parameter value.<br />- `append` - appends the value to the existing query parameter value.<br />- `delete` - removes the query parameter from the request.<br /><br /> When set to `override` enlisting multiple entries with the same name results in the query parameter being set according to all entries (which will be listed multiple times); only listed values will be set in the result.<br/><br/>Policy expressions are allowed. |No|`override`|
+|name|Specifies name of the query parameter to be set. Policy expressions are allowed. |Yes|N/A|
## Elements |Name|Description|Required| |-|--|--|
-|value|Specifies the value of the query parameter to be set. For multiple query parameters with the same name, add additional `value` elements.|Yes|
+|value|Specifies the value of the query parameter to be set. For multiple query parameters with the same name, add additional `value` elements. Policy expressions are allowed. |Yes|
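A brief sketch with a placeholder parameter name and value:
```xml
<set-query-parameter name="api-version" exists-action="skip">
    <value>2023-03-01</value>
</set-query-parameter>
```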
## Usage
api-management Set Status Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-status-policy.md
The `set-status` policy sets the HTTP status code to the specified value.
| Attribute | Description | Required | Default | | | - | -- | - |
-| code | Integer. The HTTP status code to return. | Yes | N/A |
-| reason | String. A description of the reason for returning the status code. | Yes | N/A |
+| code | Integer. The HTTP status code to return. Policy expressions are allowed. | Yes | N/A |
+| reason | String. A description of the reason for returning the status code. Policy expressions are allowed. | Yes | N/A |
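For example, a sketch with placeholder values:
```xml
<set-status code="429" reason="Too Many Requests" />
```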
## Usage
api-management Set Variable Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-variable-policy.md
The `set-variable` policy declares a [context](api-management-policy-expressions
| Attribute | Description | Required | | | | -- |
-| name | The name of the variable. | Yes |
-| value | The value of the variable. This can be an expression or a literal value. | Yes |
+| name | The name of the variable. Policy expressions aren't allowed. | Yes |
+| value | The value of the variable. This can be an expression or a literal value. Policy expressions are allowed. | Yes |
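A small sketch storing the request path in a variable (the variable name is a placeholder):
```xml
<set-variable name="requestPath" value="@(context.Request.Url.Path)" />
```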
## Usage
api-management Trace Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/trace-policy.md
The `trace` policy adds a custom trace into the request tracing output in the te
| Attribute | Description | Required | Default | | | - | -- | - |
-| source | String literal meaningful to the trace viewer and specifying the source of the message. | Yes | N/A |
-| severity | Specifies the severity level of the trace. Allowed values are `verbose`, `information`, `error` (from lowest to highest). | No | `verbose` |
+| source | String literal meaningful to the trace viewer and specifying the source of the message. Policy expressions aren't allowed. | Yes | N/A |
+| severity | Specifies the severity level of the trace. Allowed values are `verbose`, `information`, `error` (from lowest to highest). Policy expressions aren't allowed. | No | `verbose` |
## Elements |Name|Description|Required| |-|--|--|
-| message | A string or expression to be logged. | Yes |
+| message | A string or expression to be logged. Policy expressions are allowed. | Yes |
| metadata | Adds a custom property to the Application Insights [Trace](../azure-monitor/app/data-model-complete.md#trace) telemetry. | No | ### metadata attributes
api-management Validate Client Certificate Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-client-certificate-policy.md
For more information about custom CA certificates and certificate authorities, s
| Name | Description | Required | Default | | - | --| -- | -- |
-| validate-revocation | Boolean. Specifies whether certificate is validated against online revocation list. | No | `true` |
-| validate-trust | Boolean. Specifies if validation should fail in case chain cannot be successfully built up to trusted CA. | No | `true` |
-| validate-not-before | Boolean. Validates value against current time. | No | `true` |
-| validate-not-after | Boolean. Validates value against current time. | No | `true` |
-| ignore-error | Boolean. Specifies if policy should proceed to the next handler or jump to on-error upon failed validation. | No | `false` |
-| identity | String. Combination of certificate claim values that make certificate valid. | Yes | N/A |
+| validate-revocation | Boolean. Specifies whether certificate is validated against online revocation list. Policy expressions aren't allowed. | No | `true` |
+| validate-trust | Boolean. Specifies if validation should fail in case chain cannot be successfully built up to trusted CA. Policy expressions aren't allowed. | No | `true` |
+| validate-not-before | Boolean. Validates value against current time. Policy expressions aren't allowed. | No | `true` |
+| validate-not-after | Boolean. Validates value against current time. Policy expressions aren't allowed. | No | `true` |
+| ignore-error | Boolean. Specifies if policy should proceed to the next handler or jump to on-error upon failed validation. Policy expressions aren't allowed. | No | `false` |
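A hedged sketch combining these attributes with a single `identity` element; the subject value is a placeholder:
```xml
<validate-client-certificate validate-revocation="true" validate-trust="true" validate-not-before="true" validate-not-after="true" ignore-error="false">
    <identities>
        <identity subject="CN=partner.contoso.example" />
    </identities>
</validate-client-certificate>
```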
## Elements
api-management Validate Content Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-content-policy.md
The policy validates the following content in the request or response against th
| Attribute | Description | Required | Default | | -- | | -- | - |
-| unspecified-content-type-action | [Action](#actions) to perform for requests or responses with a content type that isn't specified in the API schema. | Yes | N/A |
-| max-size | Maximum length of the body of the request or response in bytes, checked against the `Content-Length` header. If the request body or response body is compressed, this value is the decompressed length. Maximum allowed value: 102,400 bytes (100 KB). (Contact [support](https://azure.microsoft.com/support/options/) if you need to increase this limit.) | Yes | N/A |
-| size-exceeded-action | [Action](#actions) to perform for requests or responses whose body exceeds the size specified in `max-size`. | Yes | N/A |
-| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
+| unspecified-content-type-action | [Action](#actions) to perform for requests or responses with a content type that isn't specified in the API schema. Policy expressions are allowed. | Yes | N/A |
+| max-size | Maximum length of the body of the request or response in bytes, checked against the `Content-Length` header. If the request body or response body is compressed, this value is the decompressed length. Maximum allowed value: 102,400 bytes (100 KB). (Contact [support](https://azure.microsoft.com/support/options/) if you need to increase this limit.) Policy expressions are allowed. | Yes | N/A |
+| size-exceeded-action | [Action](#actions) to perform for requests or responses whose body exceeds the size specified in `max-size`. Policy expressions are allowed.| Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. Policy expressions aren't allowed. | No | N/A |
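An illustrative sketch with placeholder values, adding a single `content` element for JSON validation:
```xml
<validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="prevent" errors-variable-name="requestBodyValidation">
    <content type="application/json" validate-as="json" action="prevent" />
</validate-content>
```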
## Elements
The policy validates the following content in the request or response against th
| Attribute | Description | Required | Default | | -- | | -- | - |
-| any-content-type-value | Content type used for validation of the body of a request or response, regardless of the incoming content type. | No | N/A |
-| missing-content-type-value | Content type used for validation of the body of a request or response, when the incoming content type is missing or empty. | No | N/A |
+| any-content-type-value | Content type used for validation of the body of a request or response, regardless of the incoming content type. Policy expressions aren't allowed. | No | N/A |
+| missing-content-type-value | Content type used for validation of the body of a request or response, when the incoming content type is missing or empty. Policy expressions aren't allowed. | No | N/A |
### content-type-map-elements
api-management Validate Graphql Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-graphql-request-policy.md
The `validate-graphql-request` policy validates the GraphQL request and authoriz
| Attribute | Description | Required | Default | | -- | | -- | - |
-| error-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
-| max-size | Maximum size of the request payload in bytes. Maximum allowed value: 102,400 bytes (100 KB). (Contact [support](https://azure.microsoft.com/support/options/) if you need to increase this limit.) | Yes | N/A |
-| max-depth | An integer. Maximum query depth. | No | 6 |
+| error-variable-name | Name of the variable in `context.Variables` to log validation errors to. Policy expressions are allowed. | No | N/A |
+| max-size | Maximum size of the request payload in bytes. Maximum allowed value: 102,400 bytes (100 KB). (Contact [support](https://azure.microsoft.com/support/options/) if you need to increase this limit.) Policy expressions are allowed. | Yes | N/A |
+| max-depth | An integer. Maximum query depth. Policy expressions are allowed. | No | 6 |
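A minimal attribute-only sketch with placeholder limits:
```xml
<validate-graphql-request error-variable-name="graphqlErrors" max-size="10240" max-depth="4" />
```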
## Elements
api-management Validate Headers Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-headers-policy.md
The `validate-headers` policy validates the response headers against the API sch
| Attribute | Description | Required | Default | | -- | | -- | - |
-| specified-header-action | [Action](#actions) to perform for response headers specified in the API schema. | Yes | N/A |
-| unspecified-header-action | [Action](#actions) to perform for response headers that aren't specified in the API schema. | Yes | N/A |
-| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
+| specified-header-action | [Action](#actions) to perform for response headers specified in the API schema. Policy expressions are allowed. | Yes | N/A |
+| unspecified-header-action | [Action](#actions) to perform for response headers that aren't specified in the API schema. Policy expressions are allowed. | Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. Policy expressions aren't allowed. | No | N/A |
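For example, a sketch in which the variable name is a placeholder:
```xml
<validate-headers specified-header-action="ignore" unspecified-header-action="detect" errors-variable-name="responseHeaderValidation" />
```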
## Elements
api-management Validate Jwt Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-jwt-policy.md
The `validate-jwt` policy enforces existence and validity of a supported JSON we
| Attribute | Description | Required | Default | | - | | -- | |
-| header-name | The name of the HTTP header holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
-| query-parameter-name | The name of the query parameter holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
-| token-value | Expression returning a string containing the token. You must not return `Bearer ` as part of the token value. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
-| failed-validation-httpcode | HTTP Status code to return if the JWT doesn't pass validation. | No | 401 |
-| failed-validation-error-message | Error message to return in the HTTP response body if the JWT doesn't pass validation. This message must have any special characters properly escaped. | No | Default error message depends on validation issue, for example "JWT not present." |
-| require-expiration-time | Boolean. Specifies whether an expiration claim is required in the token. | No | true |
-| require-scheme | The name of the token scheme, for example, "Bearer". When this attribute is set, the policy will ensure that specified scheme is present in the Authorization header value. | No | N/A |
-| require-signed-tokens | Boolean. Specifies whether a token is required to be signed. | No | true |
-| clock-skew | Timespan. Use to specify maximum expected time difference between the system clocks of the token issuer and the API Management instance. | No | 0 seconds |
-| output-token-variable-name | String. Name of context variable that will receive token value as an object of type [`Jwt`](api-management-policy-expressions.md) upon successful token validation | No | N/A |
+| header-name | The name of the HTTP header holding the token. Policy expressions are allowed. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| query-parameter-name | The name of the query parameter holding the token. Policy expressions are allowed. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| token-value | Expression returning a string containing the token. You must not return `Bearer ` as part of the token value. Policy expressions are allowed. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| failed-validation-httpcode | HTTP Status code to return if the JWT doesn't pass validation. Policy expressions are allowed. | No | 401 |
+| failed-validation-error-message | Error message to return in the HTTP response body if the JWT doesn't pass validation. This message must have any special characters properly escaped. Policy expressions are allowed. | No | Default error message depends on validation issue, for example "JWT not present." |
+| require-expiration-time | Boolean. Specifies whether an expiration claim is required in the token. Policy expressions are allowed. | No | true |
+| require-scheme | The name of the token scheme, for example, "Bearer". When this attribute is set, the policy will ensure that specified scheme is present in the Authorization header value. Policy expressions are allowed. | No | N/A |
+| require-signed-tokens | Boolean. Specifies whether a token is required to be signed. Policy expressions are allowed. | No | true |
+| clock-skew | Timespan. Use to specify maximum expected time difference between the system clocks of the token issuer and the API Management instance. Policy expressions are allowed. | No | 0 seconds |
+| output-token-variable-name | String. Name of context variable that will receive token value as an object of type [`Jwt`](api-management-policy-expressions.md) upon successful token validation. Policy expressions aren't allowed. | No | N/A |
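A hedged sketch of a common Azure AD configuration; the tenant and audience values are placeholders, not values from the article:
```xml
<validate-jwt header-name="Authorization" require-scheme="Bearer" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized" output-token-variable-name="jwt">
    <openid-config url="https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration" />
    <audiences>
        <audience>api://contoso-api</audience>
    </audiences>
</validate-jwt>
```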
api-management Validate Parameters Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-parameters-policy.md
The `validate-parameters` policy validates the header, query, or path parameters
| Attribute | Description | Required | Default | | -- | | -- | - |
-| specified-parameter-action | [Action](#actions) to perform for request parameters specified in the API schema. <br/><br/> When provided in a `headers`, `query`, or `path` element, the value overrides the value of `specified-parameter-action` in the `validate-parameters` element. | Yes | N/A |
-| unspecified-parameter-action | [Action](#actions) to perform for request parameters that aren't specified in the API schema. <br/><br/>When provided in a `headers`or `query` element, the value overrides the value of `unspecified-parameter-action` in the `validate-parameters` element. | Yes | N/A |
-| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
-| name | Name of the parameter to override validation action for. This value is case insensitive. | Yes | N/A |
-| action | [Action](#actions) to perform for the parameter with the matching name. If the parameter is specified in the API schema, this value overrides the higher-level `specified-parameter-action` configuration. If the parameter isn't specified in the API schema, this value overrides the higher-level `unspecified-parameter-action` configuration.| Yes | N/A |
+| specified-parameter-action | [Action](#actions) to perform for request parameters specified in the API schema. <br/><br/> When provided in a `headers`, `query`, or `path` element, the value overrides the value of `specified-parameter-action` in the `validate-parameters` element. Policy expressions are allowed. | Yes | N/A |
+| unspecified-parameter-action | [Action](#actions) to perform for request parameters that aren't specified in the API schema. <br/><br/>When provided in a `headers` or `query` element, the value overrides the value of `unspecified-parameter-action` in the `validate-parameters` element. Policy expressions are allowed. | Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. Policy expressions aren't allowed. | No | N/A |
## Elements |Name|Description|Required| |-|--|--|
-| headers | Add this element to override default validation [actions](#actions) for header parameters in requests. | No |
-| query | Add this element to override default validation [actions](#actions) for query parameters in requests. | No |
-| path | Add this element to override default validation [actions](#actions) for URL path parameters in requests. | No |
-| parameter | Add one or more elements for named parameters to override higher-level configuration of the validation [actions](#actions). | No |
+| headers | Add this element and one or more `parameter` subelements to override default validation [actions](#actions) for certain named parameters in requests. If the parameter is specified in the API schema, this value overrides the higher-level `specified-parameter-action` configuration. If the parameter isn't specified in the API schema, this value overrides the higher-level `unspecified-parameter-action` configuration. | No |
+| query | Add this element and one or more `parameter` subelements to override default validation [actions](#actions) for certain named query parameters in requests. If the parameter is specified in the API schema, this value overrides the higher-level `specified-parameter-action` configuration. If the parameter isn't specified in the API schema, this value overrides the higher-level `unspecified-parameter-action` configuration. | No |
+| path | Add this element and one or more `parameter` subelements to override default validation [actions](#actions) for certain URL path parameters in requests. If the parameter is specified in the API schema, this value overrides the higher-level `specified-parameter-action` configuration. If the parameter isn't specified in the API schema, this value overrides the higher-level `unspecified-parameter-action` configuration. | No |
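An illustrative sketch that overrides the default action for one named header (the names are placeholders):
```xml
<validate-parameters specified-parameter-action="prevent" unspecified-parameter-action="detect" errors-variable-name="requestParametersValidation">
    <headers specified-parameter-action="ignore" unspecified-parameter-action="ignore">
        <parameter name="x-legacy-header" action="detect" />
    </headers>
</validate-parameters>
```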
[!INCLUDE [api-management-validation-policy-actions](../../includes/api-management-validation-policy-actions.md)]
api-management Validate Status Code Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-status-code-policy.md
The `validate-status-code` policy validates the HTTP status codes in responses a
| Attribute | Description | Required | Default | | -- | | -- | - |
-| unspecified-status-code-action | [Action](#actions) to perform for HTTP status codes in responses that aren't specified in the API schema. | Yes | N/A |
-| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
+| unspecified-status-code-action | [Action](#actions) to perform for HTTP status codes in responses that aren't specified in the API schema. Policy expressions are allowed. | Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. Policy expressions aren't allowed. | No | N/A |
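For example, a sketch with a placeholder override for status code 500:
```xml
<validate-status-code unspecified-status-code-action="ignore" errors-variable-name="responseStatusCodeValidation">
    <status-code code="500" action="prevent" />
</validate-status-code>
```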
## Elements
api-management Wait Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/wait-policy.md
The `wait` policy executes its immediate child policies in parallel, and waits f
| Attribute | Description | Required | Default | | -- | | -- | - |
-| for | Determines whether the `wait` policy waits for all immediate child policies to be completed or just one. Allowed values are:<br /><br /> - `all` - wait for all immediate child policies to complete<br />- `any` - wait for any immediate child policy to complete. Once the first immediate child policy has completed, the `wait` policy completes and execution of any other immediate child policies is terminated. | No | `all` |
+| for | Determines whether the `wait` policy waits for all immediate child policies to be completed or just one. Allowed values are:<br /><br /> - `all` - wait for all immediate child policies to complete<br />- `any` - wait for any immediate child policy to complete. Once the first immediate child policy has completed, the `wait` policy completes and execution of any other immediate child policies is terminated.<br/><br/>Policy expressions are allowed. | No | `all` |
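A sketch waiting on whichever of two `send-request` calls completes first; the URLs and variable names are placeholders:
```xml
<wait for="any">
    <send-request mode="new" response-variable-name="primaryResult" timeout="10" ignore-error="true">
        <set-url>https://primary.example.com/lookup</set-url>
        <set-method>GET</set-method>
    </send-request>
    <send-request mode="new" response-variable-name="secondaryResult" timeout="10" ignore-error="true">
        <set-url>https://secondary.example.com/lookup</set-url>
        <set-method>GET</set-method>
    </send-request>
</wait>
```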
## Elements
api-management Xml To Json Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/xml-to-json-policy.md
The `xml-to-json` policy converts a request or response body from XML to JSON. T
| Attribute | Description | Required | Default | | -- | | -- | - |
-|kind|The attribute must be set to one of the following values.<br /><br /> - `javascript-friendly` - the converted JSON has a form friendly to JavaScript developers.<br />- `direct` - the converted JSON reflects the original XML document's structure.|Yes|N/A|
-|apply|The attribute must be set to one of the following values.<br /><br /> - `always` - convert always.<br />- `content-type-xml` - convert only if response Content-Type header indicates presence of XML.|Yes|N/A|
-|consider-accept-header|The attribute must be set to one of the following values.<br /><br /> - `true` - apply conversion if JSON is requested in request Accept header.<br />- `false` -always apply conversion.|No|`true`|
+|kind|The attribute must be set to one of the following values.<br /><br /> - `javascript-friendly` - the converted JSON has a form friendly to JavaScript developers.<br />- `direct` - the converted JSON reflects the original XML document's structure.<br/><br/>Policy expressions are allowed.|Yes|N/A|
+|apply|The attribute must be set to one of the following values.<br /><br /> - `always` - convert always.<br />- `content-type-xml` - convert only if response Content-Type header indicates presence of XML.<br/><br/>Policy expressions are allowed.|Yes|N/A|
+|consider-accept-header|The attribute must be set to one of the following values.<br /><br /> - `true` - apply conversion if JSON is requested in request Accept header.<br />- `false` -always apply conversion.<br/><br/>Policy expressions are allowed.|No|`true`|
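An illustrative combination of the documented values:
```xml
<xml-to-json kind="javascript-friendly" apply="content-type-xml" consider-accept-header="true" />
```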
## Usage
api-management Xsl Transform Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/xsl-transform-policy.md
Previously updated : 08/26/2022 Last updated : 03/28/2023
app-service Version Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/version-comparison.md
Title: 'App Service Environment version comparison' description: This article provides an overview of the App Service Environment versions and feature differences between them. Previously updated : 3/20/2023 Last updated : 3/30/2023
App Service Environment has three versions. App Service Environment v3 is the la
|Network watcher or NSG flow logs to monitor traffic |Yes |Yes |Yes | |Subnet delegation |Not required |Not required |[Must be delegated to `Microsoft.Web/hostingEnvironments`](networking.md#subnet-requirements) | |Subnet size|An App Service Environment v1 with no App Service plans uses 12 addresses before you create an app. If you use an ILB App Service Environment v1, then it uses 13 addresses before you create an app. As you scale out, infrastructure roles are added at every multiple of 15 and 20 of your App Service plan instances. |An App Service Environment v2 with no App Service plans uses 12 addresses before you create an app. If you use an ILB App Service Environment v2, then it uses 13 addresses before you create an app. As you scale out, infrastructure roles are added at every multiple of 15 and 20 of your App Service plan instances. |Any particular subnet has five addresses reserved for management purposes. In addition to the management addresses, App Service Environment v3 dynamically scales the supporting infrastructure, and uses between 4 and 27 addresses, depending on the configuration and load. You can use the remaining addresses for instances in the App Service plan. The minimal size of your subnet can be a /27 address space (32 addresses). |
+|DNS fallback |Azure DNS |Azure DNS |[Ensure that you have a forwarder to a public DNS or include Azure DNS in the list of custom DNS servers](migrate.md#migration-feature-limitations) |
### Scaling
application-gateway Application Gateway Private Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-private-deployment.md
+
+ Title: Private Application Gateway deployment (preview)
+
+description: Learn how to restrict access to Application Gateway
++++ Last updated : 03/01/2023+
+#Customer intent: As an administrator, I want to evaluate Azure Private Application Gateway
++
+# Private Application Gateway deployment (preview)
+
+## Introduction
+
+Historically, Application Gateway v2 SKUs, and to a certain extent v1, have required public IP addressing to enable management of the service. This requirement has imposed several limitations in using fine-grain controls in Network Security Groups and Route Tables. Specifically, the following challenges have been observed:
+
+1. All Application Gateway v2 deployments must contain a public-facing frontend IP configuration to enable communication to the **Gateway Manager** service tag.
+2. Network Security Group associations require rules to allow inbound access from GatewayManager and outbound access to the Internet.
+3. When introducing a default route (0.0.0.0/0) to forward traffic anywhere other than the Internet, metrics, monitoring, and updates of the gateway result in a failed status.
+
+Application Gateway v2 can now address each of these items to further eliminate risk of data exfiltration and control privacy of communication from within the virtual network. These changes include the following capabilities:
+
+1. Private IP address only frontend IP configuration
+ - No public IP address resource required
+2. Elimination of inbound traffic from GatewayManager service tag via Network Security Group
+3. Ability to define a **Deny All** outbound Network Security Group (NSG) rule to restrict egress traffic to the Internet
+4. Ability to override the default route to the Internet (0.0.0.0/0)
+5. DNS resolution via defined resolvers on the virtual network [Learn more](../virtual-network/manage-virtual-network.md#change-dns-servers), including private link private DNS zones.
+
+Each of these features can be configured independently. For example, a public IP address can be used to allow traffic inbound from the Internet and you can define a **_Deny All_** outbound rule in the network security group configuration to prevent data exfiltration.
+
+## Onboard to public preview
+
+The functionality of the new controls of private IP frontend configuration, control over NSG rules, and control over route tables, are currently in public preview. To join the public preview, you can opt in to the experience using the Azure portal, PowerShell, CLI, or REST API.
+
+When you join the preview, all new Application Gateways will provision with the ability to define any combination of the NSG, Route Table, or private IP configuration features. If you wish to opt out from the new functionality and return to the current generally available functionality of Application Gateway, you can do so by [unregistering from the preview](#unregister-from-the-preview).
+
+For more information about preview features, see [Set up preview features in Azure subscription](../azure-resource-manager/management/preview-features.md)
+
+## Register to the preview
+
+# [Azure Portal](#tab/portal)
+
+Use the following steps to enroll into the public preview for the enhanced Application Gateway network controls via the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. In the search box, enter _subscriptions_ and select **Subscriptions**.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/search.png" alt-text="Azure portal search.":::
+
+3. Select the link for your subscription's name.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/subscriptions.png" alt-text="Select Azure subscription.":::
+
+4. From the left menu, under **Settings** select **Preview features**.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-menu.png" alt-text="Azure preview features menu.":::
+
+5. You see a list of available preview features and your current registration status.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-list.png" alt-text="Azure portal list of preview features.":::
+
+6. From **Preview features** type into the filter box **EnableApplicationGatewayNetworkIsolation**, check the feature, and click **Register**.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/filter.png" alt-text="Azure portal filter preview features.":::
+
+# [Azure PowerShell](#tab/powershell)
+
+To enroll into the public preview for the enhanced Application Gateway network controls via Azure PowerShell, the following commands can be referenced:
+
+```azurepowershell
+Register-AzProviderFeature -FeatureName "EnableApplicationGatewayNetworkIsolation" -ProviderNamespace "Microsoft.Network"
+```
+
+To view registration status of the feature, use the Get-AzProviderFeature cmdlet.
+```Output
+FeatureName ProviderName RegistrationState
+-- --
+EnableApplicationGatewayNetworkIsolation Microsoft.Network Registered
+```
+
+# [Azure CLI](#tab/cli)
+
+To enroll into the public preview for the enhanced Application Gateway network controls via Azure CLI, the following commands can be referenced:
+
+```azurecli
+az feature register --name EnableApplicationGatewayNetworkIsolation --namespace Microsoft.Network
+```
+
+To view registration status of the feature, use the `az feature show` command.
+```Output
+Name RegistrationState
+- -
+Microsoft.Network/EnableApplicationGatewayNetworkIsolation Registered
+```
+
+A list of all Azure CLI references for Private Link Configuration on Application Gateway can be found here: [Azure CLI - Private Link](/cli/azure/network/application-gateway/private-link)
+++
+For more information about preview features, see [Set up preview features in Azure subscription](../azure-resource-manager/management/preview-features.md)
+
+## Unregister from the preview
+
+# [Azure Portal](#tab/portal)
+
+To opt out of the public preview for the enhanced Application Gateway network controls via Portal, use the following steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. In the search box, enter _subscriptions_ and select **Subscriptions**.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/search.png" alt-text="Azure portal search.":::
+
+3. Select the link for your subscription's name.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/subscriptions.png" alt-text="Select Azure subscription.":::
+
+4. From the left menu, under **Settings** select **Preview features**.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-menu.png" alt-text="Azure preview features menu.":::
+
+5. You see a list of available preview features and your current registration status.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-list.png" alt-text="Azure portal list of preview features.":::
+
+6. From **Preview features** type into the filter box **EnableApplicationGatewayNetworkIsolation**, check the feature, and click **Unregister**.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/filter.png" alt-text="Azure portal filter preview features.":::
+
+# [Azure PowerShell](#tab/powershell)
+
+To opt out of the public preview for the enhanced Application Gateway network controls via Azure PowerShell, the following commands can be referenced:
+
+```azurepowershell
+Unregister-AzProviderFeature -FeatureName "EnableApplicationGatewayNetworkIsolation" -ProviderNamespace "Microsoft.Network"
+```
+
+To view registration status of the feature, use the Get-AzProviderFeature cmdlet.
+```Output
+FeatureName ProviderName RegistrationState
+-- --
+EnableApplicationGatewayNetworkIsolation Microsoft.Network Unregistered
+```
+
+# [Azure CLI](#tab/cli)
+
+To opt out of the public preview for the enhanced Application Gateway network controls via Azure CLI, the following commands can be referenced:
+
+```azurecli
+az feature unregister --name EnableApplicationGatewayNetworkIsolation --namespace Microsoft.Network
+```
+
+To view registration status of the feature, use the `az feature show` command.
+```Output
+Name RegistrationState
+- -
+Microsoft.Network/EnableApplicationGatewayNetworkIsolation Unregistered
+```
+
+A list of all Azure CLI references for Private Link Configuration on Application Gateway can be found here: [Azure CLI - Private Link](/cli/azure/network/application-gateway/private-link)
+++
+## Regions and availability
+
+The Private Application Gateway preview is available to all public cloud regions [where Application Gateway v2 sku is supported](./overview-v2.md#unsupported-regions).
+
+## Configuration of network controls
+
+After registration into the public preview, the NSG, route table, and private IP address frontend configuration can be set up using any method. For example: REST API, ARM Template, Bicep deployment, Terraform, PowerShell, CLI, or Portal. No API or command changes are introduced with this public preview.
+
+## Resource Changes
+
+After your gateway is provisioned, a resource tag is automatically assigned with the name **EnhancedNetworkControl** and a value of **True**. See the following example:
+
+ ![View the EnhancedNetworkControl tag](./media/application-gateway-private-deployment/tags.png)
+
+The resource tag is cosmetic, and serves to confirm that the gateway has been provisioned with the capabilities to configure any combination of the private only gateway features. Modification or deletion of the tag or value doesn't change any functional workings of the gateway.
+
+> [!TIP]
+> The **EnhancedNetworkControl** tag can be helpful when existing Application Gateways were deployed in the subscription prior to feature enablement and you would like to differentiate which gateway can utilize the new functionality.
+
+## Outbound Internet connectivity
+
+Application Gateway deployments that contain only a private frontend IP configuration (do not have a public IP frontend configuration) are not able to egress traffic destined to the Internet. This configuration affects communication to backend targets that are publicly accessible via the Internet.
+
+To enable outbound connectivity from your Application Gateway to an Internet facing backend target, you can utilize [Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md) or forward traffic to a virtual appliance that has access to the Internet.
+
+Virtual Network NAT offers control over what IP address or prefix should be used as well as configurable idle-timeout. To configure, create a new NAT Gateway with a public IP address or public prefix and associate it with the subnet containing Application Gateway.
+
+If a virtual appliance is required for Internet egress, see the [route table control](#route-table-control) section in this document.
+
+Common scenarios where public IP usage is required:
+- Communication to key vault without use of private endpoints or service endpoints
+ - Outbound communication isn't required for pfx files uploaded to Application Gateway directly
+- Communication to backend targets via Internet
+- Communication to Internet facing CRL or OCSP endpoints
+
+## Network Security Group Control
+
+Network security groups associated to an Application Gateway subnet no longer require inbound rules for GatewayManager, and they don't require outbound access to the Internet. The only required rule is **Allow inbound from AzureLoadBalancer** to ensure health probes can reach the gateway.
+
+The following configuration is an example of the most restrictive set of inbound rules, denying all traffic except Azure health probes. In addition, explicit rules are defined to allow client traffic to reach the listener of the gateway.
+
+ [ ![View the inbound security group rules](./media/application-gateway-private-deployment/inbound-rules.png) ](./media/application-gateway-private-deployment/inbound-rules.png#lightbox)
+
+> [!Note]
+> Application Gateway displays an alert asking you to ensure that the **Allow LoadBalanceRule** is specified if a **DenyAll** rule inadvertently restricts access to health probes.
+
+### Example scenario
+
+This example walks through creation of an NSG using the Azure portal with the following rules:
+
+- Allow inbound traffic to ports 80 and 8080 on the Application Gateway from client requests originating from the Internet
+- Deny all other inbound traffic
+- Allow outbound traffic to a backend target in another virtual network
+- Allow outbound traffic to a backend target that is Internet accessible
+- Deny all other outbound traffic
+
+First, [create a network security group](../virtual-network/tutorial-filter-network-traffic.md#create-a-network-security-group). This security group contains your inbound and outbound rules.
+
+#### Inbound rules
+
+Three inbound [default rules](../virtual-network/network-security-groups-overview.md#default-security-rules) are already provisioned in the security group. See the following example:
+
+ [ ![View default security group rules](./media/application-gateway-private-deployment/default-rules.png) ](./media/application-gateway-private-deployment/default-rules.png#lightbox)
+
+Next, create the following four new inbound security rules:
+
+- Allow inbound port 80, tcp, from Internet (any)
+- Allow inbound port 8080, tcp, from Internet (any)
+- Allow inbound from AzureLoadBalancer
+- Deny Any Inbound
+
+To create these rules:
+- Select **Inbound security rules**
+- Select **Add**
+- Enter the following information for each rule into the **Add inbound security rule** pane.
+- When you've entered the information, select **Add** to create the rule.
+- Creation of each rule takes a moment.
+
+| Rule # | Source | Source service tag | Source port ranges | Destination | Service | Dest port ranges | Protocol | Action | Priority | Name |
+| ------ | ------ | ------------------ | ------------------ | ----------- | ------- | ---------------- | -------- | ------ | -------- | ---- |
+| 1 | Any | | * | Any | HTTP | 80 | TCP | Allow | 1028 | AllowWeb |
+| 2 | Any | | * | Any | Custom | 8080 | TCP | Allow | 1029 | AllowWeb8080 |
+| 3 | Service Tag | AzureLoadBalancer | * | Any | Custom | * | Any | Allow | 1045 | AllowLB |
+| 4 | Any | | * | Any | Custom | * | Any | Deny | 4095 | DenyAllInbound |
++
+Select **Refresh** to review all rules when provisioning is complete.
+
+ [ ![View example inbound security group rules](./media/application-gateway-private-deployment/inbound-example.png) ](./media/application-gateway-private-deployment/inbound-example.png#lightbox)
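+
+If you prefer scripting over the portal, the following Azure CLI sketch creates the same four inbound rules. The resource group and NSG names (`myResourceGroup`, `myNSG`) are placeholders:
+
+```azurecli
+# Allow client traffic on ports 80 and 8080, allow Azure health probes, then deny all other inbound traffic
+az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG --name AllowWeb --priority 1028 --direction Inbound --access Allow --protocol Tcp --source-address-prefixes '*' --destination-port-ranges 80
+az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG --name AllowWeb8080 --priority 1029 --direction Inbound --access Allow --protocol Tcp --source-address-prefixes '*' --destination-port-ranges 8080
+az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG --name AllowLB --priority 1045 --direction Inbound --access Allow --protocol '*' --source-address-prefixes AzureLoadBalancer --destination-port-ranges '*'
+az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG --name DenyAllInbound --priority 4095 --direction Inbound --access Deny --protocol '*' --source-address-prefixes '*' --destination-port-ranges '*'
+```
+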
+
+#### Outbound rules
+
+Three default outbound rules with priority 65000, 65001, and 65500 are already provisioned.
+
+Create the following three new outbound security rules:
+
+- Allow TCP 443 from 10.10.4.0/24 to backend target 20.63.8.49
+- Allow TCP 80 from source 10.10.4.0/24 to destination 10.13.0.4
+- DenyAll traffic rule
+
+These rules are assigned a priority of 400, 401, and 4096, respectively.
+
+> [!NOTE]
+> - 10.10.4.0/24 is the Application Gateway subnet address space.
+> - 10.13.0.4 is a virtual machine in a peered VNet.
+> - 20.63.8.49 is a backend target VM.
+
+To create these rules:
+- Select **Outbound security rules**
+- Select **Add**
+- Enter the following information for each rule into the **Add outbound security rule** pane.
+- When you've entered the information, select **Add** to create the rule.
+- Creation of each rule takes a moment.
+
+| Rule # | Source | Source IP addresses/CIDR ranges | Source port ranges | Destination | Destination IP addresses/CIDR ranges | Service | Dest port ranges | Protocol | Action | Priority | Name |
+| ------ | ------------ | ------------------------------- | ------------------ | ------------ | ------------------------------------ | ------- | ---------------- | -------- | ------ | -------- | -------------------- |
+| 1 | IP Addresses | 10.10.4.0/24 | * | IP Addresses | 20.63.8.49 | HTTPS | 443 | TCP | Allow | 400 | AllowToBackendTarget |
+| 2 | IP Addresses | 10.10.4.0/24 | * | IP Addresses | 10.13.0.4 | HTTP | 80 | TCP | Allow | 401 | AllowToPeeredVnetVM |
+| 3 | Any | | * | Any | | Custom | * | Any | Deny | 4096 | DenyAll |
+
+Select **Refresh** to review all rules when provisioning is complete.
+
+[ ![View example outbound security group rules](./media/application-gateway-private-deployment/outbound-example.png) ](./media/application-gateway-private-deployment/outbound-example.png#lightbox)
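+
+The equivalent outbound rules can also be scripted with the Azure CLI. This is a sketch using the same placeholder resource group and NSG names as in the inbound example, with the addresses from the preceding table:
+
+```azurecli
+# Allow traffic to the backend target and the peered VNet VM, then deny all other outbound traffic
+az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG --name AllowToBackendTarget --priority 400 --direction Outbound --access Allow --protocol Tcp --source-address-prefixes 10.10.4.0/24 --destination-address-prefixes 20.63.8.49 --destination-port-ranges 443
+az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG --name AllowToPeeredVnetVM --priority 401 --direction Outbound --access Allow --protocol Tcp --source-address-prefixes 10.10.4.0/24 --destination-address-prefixes 10.13.0.4 --destination-port-ranges 80
+az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG --name DenyAll --priority 4096 --direction Outbound --access Deny --protocol '*' --source-address-prefixes '*' --destination-port-ranges '*'
+```
+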
+
+#### Associate NSG to the subnet
+
+The last step is to [associate the network security group to the subnet](../virtual-network/tutorial-filter-network-traffic.md#associate-network-security-group-to-subnet) that contains your Application Gateway.
+
+![Associate NSG to subnet](./media/application-gateway-private-deployment/nsg-subnet.png)
+
+Result:
+
+[ ![View the NSG overview](./media/application-gateway-private-deployment/nsg-overview.png) ](./media/application-gateway-private-deployment/nsg-overview.png#lightbox)
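+
+To script the association instead of using the portal, a minimal Azure CLI sketch (placeholder virtual network and subnet names) is:
+
+```azurecli
+# Associate the NSG with the Application Gateway subnet
+az network vnet subnet update --resource-group myResourceGroup --vnet-name myVNet --name appGatewaySubnet --network-security-group myNSG
+```
+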
+
+> [!IMPORTANT]
+> Be careful when you define **DenyAll** rules, as you might inadvertently deny inbound traffic from clients to which you intend to allow access. You might also inadvertently deny outbound traffic to the backend target, causing backend health to fail and produce 5XX responses.
+
+## Route Table Control
+
+In the current offering of Application Gateway, associating a route table that contains a rule (or creating a rule) defined as 0.0.0.0/0 with a next hop of virtual appliance is unsupported, to ensure proper management of Application Gateway.
+
+After registration of the public preview feature, you can forward traffic to a virtual appliance by defining a route table rule for 0.0.0.0/0 with a next hop of Virtual Appliance.
+
+Forced tunneling, or learning of a 0.0.0.0/0 route through BGP advertising, doesn't affect Application Gateway health and is honored for traffic flow. This scenario can apply when using VPN, ExpressRoute, Route Server, or Virtual WAN.
+
+### Example scenario
+
+In the following example, we create a route table and associate it with the Application Gateway subnet to ensure that outbound Internet access from the subnet egresses through a virtual appliance. At a high level, the following design is summarized in Figure 1:
+- The Application Gateway is in a spoke virtual network
+- There's a network virtual appliance (a virtual machine) in the hub network
+- A route table with a default route (0.0.0.0/0) to the virtual appliance is associated with the Application Gateway subnet
+
+![Diagram for example route table](./media/application-gateway-private-deployment/route-table-diagram.png)
+
+**Figure 1**: Internet access egress through virtual appliance
+
+To create a route table and associate it to the Application Gateway subnet:
+
+1. [Create a route table](../virtual-network/manage-route-table.md#create-a-route-table):
+
+ ![View the newly created route table](./media/application-gateway-private-deployment/route-table-create.png)
+
+2. Select **Routes** and create a route for 0.0.0.0/0, setting the next hop to the IP address of your virtual appliance VM:
+
+ [ ![View of adding default route to network virtual appliance](./media/application-gateway-private-deployment/default-route-nva.png) ](./media/application-gateway-private-deployment/default-route-nva.png#lightbox)
+
+3. Select **Subnets** and associate the route table to the Application Gateway subnet:
+
+ [ ![View of associating the route to the AppGW subnet](./media/application-gateway-private-deployment/associate-route-to-subnet.png) ](./media/application-gateway-private-deployment/associate-route-to-subnet.png#lightbox)
+
+4. Validate that traffic is passing through the virtual appliance.
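+
+The same steps can be scripted with the Azure CLI. In this sketch, `10.0.1.4` stands in for the private IP address of the network virtual appliance, and the other resource names are placeholders:
+
+```azurecli
+# Create the route table, add a default route to the NVA, and associate the table with the Application Gateway subnet
+az network route-table create --resource-group myResourceGroup --name appGatewayRouteTable
+az network route-table route create --resource-group myResourceGroup --route-table-name appGatewayRouteTable --name defaultToNva --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4
+az network vnet subnet update --resource-group myResourceGroup --vnet-name myVNet --name appGatewaySubnet --route-table appGatewayRouteTable
+```
+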
+
+## Limitations / Known Issues
+
+While in public preview, the following limitations are known.
+
+### Private link configuration (preview)
+
+[Private link configuration](private-link.md) for tunneling traffic through private endpoints to Application Gateway isn't supported with a private-only gateway.
+
+### Private IP frontend configuration only with AGIC
+
+AGIC v1.7 must be used to support a private frontend IP only configuration.
+
+### Private Endpoint connectivity via Global VNet Peering
+
+If Application Gateway has a backend target or key vault reference to a private endpoint located in a VNet that is accessible via global VNet peering, traffic is dropped, resulting in an unhealthy status.
+
+### Coexisting v2 Application Gateways created prior to enablement of enhanced network control
+
+If a subnet shares Application Gateway v2 deployments that were created both prior to and after enablement of the enhanced network control functionality, Network Security Group (NSG) and Route Table functionality is limited to the prior gateway deployment. Application gateways provisioned prior to enablement of the new functionality must either be reprovisioned, or newly created gateways must use a different subnet to enable enhanced network security group and route table features.
+
+- If a gateway deployed prior to enablement of the new functionality exists in the subnet, you might see errors such as: `For routes associated to subnet containing Application Gateway V2, please ensure '0.0.0.0/0' uses Next Hop Type as 'Internet'` when adding route table entries.
+- When adding network security group rules to the subnet, you might see: `Failed to create security rule 'DenyAnyCustomAnyOutbound'. Error: Network security group \<NSG-name\> blocks outgoing Internet traffic on subnet \<AppGWSubnetId\>, associated with Application Gateway \<AppGWResourceId\>. This isn't permitted for Application Gateways that have fast update enabled or have V2 Sku.`
+
+### Unknown Backend Health status
+
+If backend health is _Unknown_, you may see the following error:
+ + The backend health status could not be retrieved. This happens when an NSG/UDR/Firewall on the application gateway subnet is blocking traffic on ports 65503-65534 in case of v1 SKU, and ports 65200-65535 in case of the v2 SKU or if the FQDN configured in the backend pool could not be resolved to an IP address. To learn more visit - https://aka.ms/UnknownBackendHealth.
+
+This error can be ignored and will be clarified in a future release.
+
+## Next steps
+
+- See [Azure security baseline for Application Gateway](/security/benchmark/azure/baselines/application-gateway-security-baseline.md) for more security best practices.
+
application-gateway Application Gateway Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-probe-overview.md
Azure Application Gateway by default monitors the health of all resources in its backend pool and automatically removes any resource considered unhealthy from the pool. Application Gateway continues to monitor the unhealthy instances and adds them back to the healthy backend pool once they become available and respond to health probes. By default, Application gateway sends the health probes with the same port that is defined in the backend HTTP settings. A custom probe port can be configured using a custom health probe.
-The source IP address that Application Gateway uses for health probes will depend on the backend pool:
+The source IP address that Application Gateway uses for health probes depends on the backend pool:
- If the server address in the backend pool is a public endpoint, then the source address is the application gateway's frontend public IP address. - If the server address in the backend pool is a private endpoint, then the source IP address is from the application gateway subnet's private IP address space.
-![application gateway probe example][1]
In addition to using default health probe monitoring, you can also customize the health probe to suit your application's requirements. In this article, both default and custom health probes are covered.
In addition to using default health probe monitoring, you can also customize the
An application gateway automatically configures a default health probe when you don't set up any custom probe configuration. The monitoring behavior works by making an HTTP GET request to the IP addresses or FQDN configured in the backend pool. For default probes if the backend http settings are configured for HTTPS, the probe uses HTTPS to test health of the backend servers.
-For example: You configure your application gateway to use backend servers A, B, and C to receive HTTP network traffic on port 80. The default health monitoring tests the three servers every 30 seconds for a healthy HTTP response with a 30 second timeout for each request. A healthy HTTP response has a [status code](https://msdn.microsoft.com/library/aa287675.aspx) between 200 and 399. In this case, the HTTP GET request for the health probe will look like `http://127.0.0.1/`.
+For example: You configure your application gateway to use backend servers A, B, and C to receive HTTP network traffic on port 80. The default health monitoring tests the three servers every 30 seconds for a healthy HTTP response with a 30-second timeout for each request. A healthy HTTP response has a [status code](https://msdn.microsoft.com/library/aa287675.aspx) between 200 and 399. In this case, the HTTP GET request for the health probe looks like `http://127.0.0.1/`.
If the default probe check fails for server A, the application gateway stops forwarding requests to this server. The default probe still continues to check for server A every 30 seconds. When server A responds successfully to one request from a default health probe, application gateway starts forwarding the requests to the server again.
The following table provides definitions for the properties of a custom health p
| Probe property | Description | | | | | Name |Name of the probe. This name is used to identify and refer to the probe in backend HTTP settings. |
-| Protocol |Protocol used to send the probe. This has to match with the protocol defined in the backend HTTP settings it is associated to|
-| Host |Host name to send the probe with. In v1 SKU, this value will be used only for the host header of the probe request. In v2 SKU, it will be used both as host header as well as SNI |
+| Protocol |Protocol used to send the probe. This must match the protocol defined in the backend HTTP settings it's associated with|
+| Host |Host name to send the probe with. In v1 SKU, this value is used only for the host header of the probe request. In v2 SKU, it's used both as the host header and as SNI |
| Path |Relative path of the probe. A valid path starts with '/' |
-| Port |If defined, this is used as the destination port. Otherwise, it uses the same port as the HTTP settings that it is associated to. This property is only available in the v2 SKU
+| Port |If defined, this is used as the destination port. Otherwise, it uses the same port as the HTTP settings that it's associated with. This property is only available in the v2 SKU |
| Interval |Probe interval in seconds. This value is the time interval between two consecutive probes | | Time-out |Probe time-out in seconds. If a valid response isn't received within this time-out period, the probe is marked as failed | | Unhealthy threshold |Probe retry count. The backend server is marked down after the consecutive probe failure count reaches the unhealthy threshold |
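+
+As an illustration of these properties, the following Azure CLI sketch creates a custom probe on an existing v2 gateway. The gateway and resource group names are placeholders, and the values mirror the defaults described earlier:
+
+```azurecli
+# Create a custom HTTP health probe that checks / every 30 seconds with a 30-second timeout and an unhealthy threshold of 3
+az network application-gateway probe create --resource-group myResourceGroup --gateway-name myAppGateway --name customProbe --protocol Http --host 127.0.0.1 --path / --interval 30 --timeout 30 --threshold 3
+```
+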
Once the match criteria is specified, it can be attached to probe configuration
## NSG considerations
+Fine-grained control over the Application Gateway subnet via NSG rules is possible in public preview. For more information, see [Network security group control](application-gateway-private-deployment.md#network-security-group-control).
+
+With the current functionality, there are some restrictions:
+ You must allow incoming Internet traffic on TCP ports 65503-65534 for the Application Gateway v1 SKU, and TCP ports 65200-65535 for the v2 SKU with the destination subnet as **Any** and source as **GatewayManager** service tag. This port range is required for Azure infrastructure communication. Additionally, outbound Internet connectivity can't be blocked, and inbound traffic coming from the **AzureLoadBalancer** tag must be allowed.
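+
+For reference, a minimal Azure CLI sketch of this infrastructure rule for a v2 SKU gateway (placeholder resource group and NSG names; use ports 65503-65534 instead for the v1 SKU) is:
+
+```azurecli
+# Allow Azure infrastructure traffic from the GatewayManager service tag on the v2 management port range
+az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG --name AllowGatewayManager --priority 100 --direction Inbound --access Allow --protocol Tcp --source-address-prefixes GatewayManager --destination-port-ranges 65200-65535
+```
+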
application-gateway Configuration Frontend Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-frontend-ip.md
You can configure the application gateway to have a public IP address, a private
## Public and private IP address support
-Application Gateway V2 currently doesn't support only private IP mode. It supports the following combinations:
+Application Gateway V2 currently supports the following combinations:
* Private IP address and public IP address * Public IP address only
+* [Private IP address only (preview)](application-gateway-private-deployment.md)
For more information, see [Frequently asked questions about Application Gateway](application-gateway-faq.yml#how-do-i-use-application-gateway-v2-with-only-private-frontend-ip-address).
-A public IP address isn't required for an internal endpoint that's not exposed to the Internet. That's known as an *internal load-balancer* (ILB) endpoint or private frontend IP. An application gateway ILB is useful for internal line-of-business applications that aren't exposed to the Internet. It's also useful for services and tiers in a multi-tier application within a security boundary that aren't exposed to the Internet but that require round-robin load distribution, session stickiness, or TLS termination.
+A public IP address isn't required for an internal endpoint that's not exposed to the Internet. A private frontend configuration is useful for internal line-of-business applications that aren't exposed to the Internet. It's also useful for services and tiers in a multi-tier application within a security boundary that aren't exposed to the Internet but that require round-robin load distribution, session stickiness, or TLS termination.
Only one public IP address and one private IP address is supported. You choose the frontend IP when you create the application gateway.
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
By visiting Azure Advisor for your account, you can verify if your subscription
## Network security groups
-Network security groups (NSGs) are supported on Application Gateway. But there are some restrictions:
+Network security groups (NSGs) are supported on Application Gateway.
+
+Fine-grained control over the Application Gateway subnet via NSG rules is possible in public preview. For more information, see [Network security group control](application-gateway-private-deployment.md#network-security-group-control).
+
+With the current functionality, there are some restrictions:
- You must allow incoming Internet traffic on TCP ports 65503-65534 for the Application Gateway v1 SKU, and TCP ports 65200-65535 for the v2 SKU with the destination subnet as **Any** and source as **GatewayManager** service tag. This port range is required for Azure infrastructure communication. These ports are protected (locked down) by Azure certificates. External entities, including the customers of those gateways, can't communicate on these endpoints.
For this scenario, use NSGs on the Application Gateway subnet. Put the following
## Supported user-defined routes
+Fine-grained control over the Application Gateway subnet via route table rules is possible in public preview. For more information, see [Route table control](application-gateway-private-deployment.md#route-table-control).
+
+With the current functionality, there are some restrictions:
+ > [!IMPORTANT] > Using UDRs on the Application Gateway subnet might cause the health status in the [backend health view](./application-gateway-diagnostics.md#backend-health) to appear as **Unknown**. It also might cause generation of Application Gateway logs and metrics to fail. We recommend that you don't use UDRs on the Application Gateway subnet so that you can view the backend health, logs, and metrics.
For this scenario, use NSGs on the Application Gateway subnet. Put the following
## Next steps - [Learn about frontend IP address configuration](configuration-frontend-ip.md).
+- [Learn about private Application Gateway deployment](application-gateway-private-deployment.md).
application-gateway Overview V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview-v2.md
The following table compares the features available with each SKU.
| Azure Kubernetes Service (AKS) Ingress controller | | &#x2713; | | Azure Key Vault integration | | &#x2713; | | Rewrite HTTP(S) headers | | &#x2713; |
+| Enhanced Network Control (NSG, Route Table, Private IP Frontend only) | | &#x2713; |
| URL-based routing | &#x2713; | &#x2713; | | Multiple-site hosting | &#x2713; | &#x2713; | | Mutual Authentication (mTLS) | | &#x2713; |
This section describes features and limitations of the v2 SKU that differ from t
|Difference|Details| |--|--|
-|Authentication certificate|Not supported.<br>For more information, see [Overview of end to end TLS with Application Gateway](ssl-overview.md#end-to-end-tls-with-the-v2-sku).|
|Mixing Standard_v2 and Standard Application Gateway on the same subnet|Not supported|
-|User-Defined Route (UDR) on Application Gateway subnet|Supported (specific scenarios). In preview.<br> For more information about supported scenarios, see [Application Gateway configuration overview](configuration-infrastructure.md#supported-user-defined-routes).|
-|NSG for Inbound port range| - 65200 to 65535 for Standard_v2 SKU<br>- 65503 to 65534 for Standard SKU.<br>For more information, see the [FAQ](application-gateway-faq.yml#are-network-security-groups-supported-on-the-application-gateway-subnet).|
+|User-Defined Route (UDR) on Application Gateway subnet|For information about supported scenarios, see [Application Gateway configuration overview](configuration-infrastructure.md#supported-user-defined-routes).|
+|NSG for Inbound port range| - 65200 to 65535 for Standard_v2 SKU<br>- 65503 to 65534 for Standard SKU.<br>Not required for v2 SKUs in public preview. [Learn more](application-gateway-private-deployment.md).<br>For more information, see the [FAQ](application-gateway-faq.yml#are-network-security-groups-supported-on-the-application-gateway-subnet).|
|Performance logs in Azure diagnostics|Not supported.<br>Azure metrics should be used.|
-|Billing|Billing scheduled to start on July 1, 2019.|
-|FIPS mode|These are currently not supported.|
-|ILB only mode|This is currently not supported. Public and ILB mode together is supported.|
-|Net watcher integration|Not supported.|
+|FIPS mode|Currently not supported.|
+|Private frontend configuration only mode|Currently in public preview. [Learn more](application-gateway-private-deployment.md).|
+|Azure Network Watcher integration|Not supported.|
|Microsoft Defender for Cloud integration|Not yet available. ## Migrate from v1 to v2
applied-ai-services Concept Add On Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-add-on-capabilities.md
recommendations: false
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
-> [!IMPORTANT]
->
-> * The Form Recognizer Studio add-on capabilities feature is currently in gated preview. Features, approaches and processes may change, prior to General Availability (GA), based on user feedback.
-> * Complete and submit the [**Form Recognizer private preview request form**](https://aka.ms/form-recognizer/preview/survey) to request access.
- > [!NOTE] > > Add-on capabilities for Form Recognizer Studio are only available within the Read and Layout models for the `2023-02-28-preview` release.
applied-ai-services Build A Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-a-custom-classifier.md
Previously updated : 03/03/2023 Last updated : 03/30/2023 monikerRange: 'form-recog-3.0.0' recommendations: false
The Form Recognizer Studio provides and orchestrates all the API calls required
In your project, you only need to label each document with the appropriate class label. You see the files you uploaded to storage in the file list, ready to be labeled. You have a few options to label your dataset.
Congratulations you've trained a custom classification model in the Form Recogni
The [classification model](../concept-custom-classifier.md) requires results from the [layout model](../concept-layout.md) for each training document. If you haven't provided the layout results, the Studio attempts to run the layout model for each document prior to training the classifier. This process is throttled and can result in a 429 response.
-In the Studiio, prior to training with the classification model, run the [layout model](https://formrecognizer.appliedai.azure.com/studio/layout) on each document and upload it to the same location as the original document. Once the layout results are added, you can train the classifier model with your documents.
+In the Studio, prior to training with the classification model, run the [layout model](https://formrecognizer.appliedai.azure.com/studio/layout) on each document and upload it to the same location as the original document. Once the layout results are added, you can train the classifier model with your documents.
## Next steps
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md
Title: Deploy an agent-based Linux Hybrid Runbook Worker in Automation
description: This article tells how to install an agent-based Hybrid Runbook Worker to run runbooks on Linux-based machines in your local datacenter or cloud environment. Previously updated : 03/15/2023 Last updated : 03/30/2023
sudo python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/
## <a name="remove-linux-hybrid-runbook-worker"></a>Remove the Hybrid Runbook Worker
-You can use the command `ls /var/opt/microsoft/omsagent` on the Hybrid Runbook Worker to get the workspace ID. A folder is created that is named with the workspace ID.
+Run the following commands on the agent-based Linux Hybrid Worker:
-```bash
-sudo python onboarding.py --deregister --endpoint="<URL>" --key="<PrimaryAccessKey>" --groupname="Example" --workspaceid="<workspaceId>"
-```
+1. ```bash
+ sudo bash
+ ```
+
+1. ```bash
+ rm -r /home/nxautomation
+ ```
+1. Under **Process Automation**, select **Hybrid worker groups** and then your hybrid worker group to go to the **Hybrid Worker Group** page.
+1. Under **Hybrid worker group**, select **Hybrid Workers**.
+1. Select the checkbox next to the machine(s) you want to delete from the hybrid worker group.
+1. Select **Delete** to remove the agent-based Linux Hybrid Worker.
++
+ > [!NOTE]
+ > - These commands don't remove the Log Analytics agent for Linux from the machine. They only remove the functionality and configuration of the Hybrid Runbook Worker role.
+ > - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
+ > - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
-> [!NOTE]
-> - This script doesn't remove the Log Analytics agent for Linux from the machine. It only removes the functionality and configuration of the Hybrid Runbook Worker role. </br>
-> - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
-> - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
## Remove a Hybrid Worker group
automation Automation Windows Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-windows-hrw-install.md
Modules that are installed must be in a location referenced by the `PSModulePath
## <a name="remove-windows-hybrid-runbook-worker"></a>Remove the Hybrid Runbook Worker
-1. In the Azure portal, go to your Automation account.
+1. Open a PowerShell session in Administrator mode and run the following command:
-1. Under **Account Settings**, select **Keys** and note the values for **URL** and **Primary Access Key**.
+ ```powershell-interactive
+ Remove-Item -Path "HKLM:\SOFTWARE\Microsoft\HybridRunbookWorker\<AutomationAccountID>\<HybridWorkerGroupName>" -Force -Verbose
+ ```
+1. Under **Process Automation**, select **Hybrid worker groups** and then your hybrid worker group to go to the **Hybrid Worker Group** page.
+1. Under **Hybrid worker group**, select **Hybrid Workers**.
+1. Select the checkbox next to the machine(s) you want to delete from the hybrid worker group.
+1. Select **Delete** to remove the agent-based Windows Hybrid Worker.
-1. Open a PowerShell session in Administrator mode and run the following command with your URL and primary access key values. Use the `Verbose` parameter for a detailed log of the removal process. To remove stale machines from your Hybrid Worker group, use the optional `machineName` parameter.
+ > [!NOTE]
+ > - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
+ > - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
-```powershell-interactive
-Remove-HybridRunbookWorker -Url <URL> -Key <primaryAccessKey> -MachineName <computerName>
-```
-> [!NOTE]
-> - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
-> - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
## Remove a Hybrid Worker group
automation Migrate Existing Agent Based Hybrid Worker To Extension Based Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md
Title: Migrate an existing agent-based hybrid workers to extension-based-workers
description: This article provides information on how to migrate an existing agent-based hybrid worker to extension based workers. Previously updated : 03/15/2023 Last updated : 03/30/2023 #Customer intent: As a developer, I want to learn about extension so that I can efficiently migrate agent based hybrid workers to extension based workers.
New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Locati
#### [Windows Hybrid Worker](#tab/win-hrw)
-1. In the Azure portal, go to your Automation account.
+1. Open a PowerShell session in Administrator mode and run the following command:
-1. Under **Account Settings**, select **Keys** and note the values for **URL** and **Primary Access Key**.
+ ```powershell-interactive
+ Remove-Item -Path "HKLM:\SOFTWARE\Microsoft\HybridRunbookWorker\<AutomationAccountID>\<HybridWorkerGroupName>" -Force -Verbose
+ ```
+1. Under **Process Automation**, select **Hybrid worker groups** and then your hybrid worker group to go to the **Hybrid Worker Group** page.
+1. Under **Hybrid worker group**, select **Hybrid Workers**.
+1. Select the checkbox next to the machine(s) you want to delete from the hybrid worker group.
+1. Select **Delete** to remove the agent-based Windows Hybrid Worker.
-1. Open a PowerShell session in Administrator mode and run the following command with your URL and primary access key values. Use the `Verbose` parameter for a detailed log of the removal process. To remove stale machines from your Hybrid Worker group, use the optional `machineName` parameter.
-
-```powershell-interactive
-Remove-HybridRunbookWorker -Url <URL> -Key <primaryAccessKey> -MachineName <computerName>
-```
-> [!NOTE]
-> - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
-> - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
+ > [!NOTE]
+ > - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
+ > - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
#### [Linux Hybrid Worker](#tab/lin-hrw)
-You can use the command `ls /var/opt/microsoft/omsagent` on the Hybrid Runbook Worker to get the workspace ID. A folder is created that is named with the workspace ID.
+Run the following commands on the agent-based Linux Hybrid Worker:
-```bash
-sudo python onboarding.py --deregister --endpoint="<URL>" --key="<PrimaryAccessKey>" --groupname="Example" --workspaceid="<workspaceId>"
-```
+1. ```bash
+ sudo bash
+ ```
-> [!NOTE]
-> - This script doesn't remove the Log Analytics agent for Linux from the machine. It only removes the functionality and configuration of the Hybrid Runbook Worker role. </br>
-> - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
-> - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
+1. ```bash
+ rm -r /home/nxautomation
+ ```
+1. Under **Process Automation**, select **Hybrid worker groups** and then your hybrid worker group to go to the **Hybrid Worker Group** page.
+1. Under **Hybrid worker group**, select **Hybrid Workers**.
+1. Select the checkbox next to the machine(s) you want to delete from the hybrid worker group.
+1. Select **Delete** to remove the agent-based Linux Hybrid Worker.
++
+ > [!NOTE]
+ > - These commands don't remove the Log Analytics agent for Linux from the machine. They only remove the functionality and configuration of the Hybrid Runbook Worker role.
+ > - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
+ > - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
azure-arc Plan Azure Arc Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/plan-azure-arc-data-services.md
As outlined in [Connectivity modes and requirements](./connectivity.md), you can
After you've installed the Azure Arc data controller, you can create and access data services such as Azure Arc-enabled SQL Managed Instance or Azure Arc-enabled PostgreSQL server.
+## Known limitations
+Currently, only one Azure Arc data controller per Kubernetes cluster is supported. However, you can create multiple Arc data services, such as Arc-enabled SQL managed instances and Arc-enabled PostgreSQL servers, that are managed by the same Azure Arc data controller.
+ ## Next steps You have several additional options for creating the Azure Arc data controller:
azure-arc Conceptual Workload Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-workload-management.md
Title: "Workload management in a multi-cluster environment with GitOps" description: "This article provides a conceptual overview of the workload management in a multi-cluster environment with GitOps." Previously updated : 03/13/2023 Last updated : 03/29/2023
# Workload management in a multi-cluster environment with GitOps
-Developing modern cloud-native applications often includes building, deploying, configuring, and promoting workloads across a fleet of Kubernetes clusters. With the increasing diversity of Kubernetes clusters in the fleet, and the variety of applications and services, the process can become complex and unscalable. Enterprise organizations can be more successful in these efforts by having a well defined structure that organizes people and their activities, and by using automated tools.
+Developing modern cloud-native applications often includes building, deploying, configuring, and promoting workloads across a group of Kubernetes clusters. With the increasing diversity of Kubernetes cluster types, and the variety of applications and services, the process can become complex and unscalable. Enterprise organizations can be more successful in these efforts by having a well defined structure that organizes people and their activities, and by using automated tools.
This article walks you through a typical business scenario, outlining the involved personas and major challenges that organizations often face while managing cloud-native workloads in a multi-cluster environment. It also suggests an architectural pattern that can make this complex process simpler, observable, and more scalable. ## Scenario overview
-This article describes an organization that develops cloud-native applications. Any application needs a compute resource to work on. In the cloud-native world, this compute resource is a Kubernetes cluster. An organization may have a single cluster or, more commonly, multiple clusters. So the organization must decide which applications should work on which clusters. In other words, they must schedule the applications across clusters. The result of this decision, or scheduling, is a model of the desired state of their cluster fleet. Having that in place, they need somehow to deliver applications to the assigned clusters so that they can turn the desired state into the reality, or, in other words, reconcile it.
+This article describes an organization that develops cloud-native applications. Any application needs a compute resource to work on. In the cloud-native world, this compute resource is a Kubernetes cluster. An organization may have a single cluster or, more commonly, multiple clusters. So the organization must decide which applications should work on which clusters. In other words, they must schedule the applications across clusters. The result of this decision, or scheduling, is a model of the desired state of the clusters in their environment. Having that in place, they need somehow to deliver applications to the assigned clusters so that they can turn the desired state into the reality, or, in other words, reconcile it.
Every application goes through a software development lifecycle that promotes it to the production environment. For example, an application is built, deployed to Dev environment, tested and promoted to Stage environment, tested, and finally delivered to production. For a cloud-native application, the application requires and targets different Kubernetes cluster resources throughout its lifecycle. In addition, applications normally require clusters to provide some platform services, such as Prometheus and Fluentbit, and infrastructure configurations, such as networking policy.
-Depending on the application, there may be a great diversity of cluster types to which the application is deployed. The same application with different configurations could be hosted on a managed cluster in the cloud, on a connected cluster in an on-premises environment, on a fleet of clusters on semi-connected edge devices on factory lines or military drones, and on an air-gapped cluster on a starship. Another complexity is that clusters in early lifecycle stages such as Dev and QA are normally managed by the developer, while reconciliation to actual production clusters may be managed by the organization's customers. In the latter case, the developer may be responsible only for promoting and scheduling the application across different rings.
+Depending on the application, there may be a great diversity of cluster types to which the application is deployed. The same application with different configurations could be hosted on a managed cluster in the cloud, on a connected cluster in an on-premises environment, on a group of clusters on semi-connected edge devices on factory lines or military drones, and on an air-gapped cluster on a starship. Another complexity is that clusters in early lifecycle stages such as Dev and QA are normally managed by the developer, while reconciliation to actual production clusters may be managed by the organization's customers. In the latter case, the developer may be responsible only for promoting and scheduling the application across different rings.
## Challenges at scale
In a small organization with a single application and only a few operations, mos
The following capabilities are required to perform this type of workload management at scale in a multi-cluster environment: - Separation of concerns on scheduling and reconciling-- Promotion of the fleet state through a chain of environments
+- Promotion of the multi-cluster state through a chain of environments
- Sophisticated, extensible and replaceable scheduler - Flexibility to use different reconcilers for different cluster types depending on their nature and connectivity
Before we describe the scenario, let's clarify which personas are involved, what
### Platform team
-The platform team is responsible for managing the fleet of clusters that hosts applications produced by application teams.
+The platform team is responsible for managing the clusters that host applications produced by application teams.
Key responsibilities of the platform team are: * Define staging environments (Dev, QA, UAT, Prod).
-* Define cluster types in the fleet and their distribution across environments.
+* Define cluster types and their distribution across environments.
* Provision new clusters.
-* Manage infrastructure configurations across the fleet.
+* Manage infrastructure configurations across the clusters.
* Maintain platform services used by applications. * Schedule applications and platform services on the clusters.
Key responsibilities of the platform team are:
The application team is responsible for the software development lifecycle (SDLC) of their applications. They provide Kubernetes manifests that describe how to deploy the application to different targets. They're responsible for owning CI/CD pipelines that create container images and Kubernetes manifests and promote deployment artifacts across environment stages.
-Typically, the application team has no knowledge of the clusters that they are deploying to. They aren't aware of the structure of the fleet, global configurations, or tasks performed by other teams. The application team primarily understands the success of their application rollout as defined by the success of the pipeline stages.
+Typically, the application team has no knowledge of the clusters that they are deploying to. They aren't aware of the structure of the multi-cluster environment, global configurations, or tasks performed by other teams. The application team primarily understands the success of their application rollout as defined by the success of the pipeline stages.
Key responsibilities of the application team are:
Let's have a look at the high level solution architecture and understand its pri
### Control plane
-The platform team models the fleet in the control plane. It's designed to be human-oriented and easy to understand, update, and review. The control plane operates with abstractions such as Cluster Types, Environments, Workloads, Scheduling Policies, Configs and Templates. These abstractions are handled by an automated process that assigns deployment targets and configuration values to the cluster types, then saves the result to the platform GitOps repository. Although the entire fleet may consist of thousands of physical clusters, the platform repository operates at a higher level, grouping the clusters into cluster types.
+The platform team models the multi-cluster environment in the control plane. It's designed to be human-oriented and easy to understand, update, and review. The control plane operates with abstractions such as Cluster Types, Environments, Workloads, Scheduling Policies, Configs and Templates. These abstractions are handled by an automated process that assigns deployment targets and configuration values to the cluster types, then saves the result to the platform GitOps repository. Although there may be thousands of physical clusters, the platform repository operates at a higher level, grouping the clusters into cluster types.
The main requirement for the control plane storage is to provide a reliable and secure transaction processing functionality, rather than being hit with complex queries against a large amount of data. Various technologies may be used to store the control plane data.
Every cluster type can use a different reconciler (such as Flux, ArgoCD, Zarf, R
### Platform services
-Platform services are workloads (such as Prometheus, NGINX, Fluentbit, and so on) maintained by the platform team. Just like any workloads, they have their source repositories and manifests storage. The source repositories may contain pointers to external Helm charts. CI/CD pipelines pull the charts with containers and perform necessary security scans before submitting them to the manifests storage, from where they're reconciled to the clusters in the fleet.
+Platform services are workloads (such as Prometheus, NGINX, Fluentbit, and so on) maintained by the platform team. Just like any workloads, they have their source repositories and manifests storage. The source repositories may contain pointers to external Helm charts. CI/CD pipelines pull the charts with containers and perform necessary security scans before submitting them to the manifests storage, from where they're reconciled to the clusters.
### Deployment Observability Hub
-Deployment Observability Hub is a central storage that is easy to query with complex queries against a large amount of data. It contains deployment data with historical information on workload versions and their deployment state across clusters in the fleet. Clusters register themselves in the storage and update their compliance status with the GitOps repositories. Clusters operate at the level of Git commits only. High-level information, such as application versions, environments, and cluster type data, is transferred to the central storage from the GitOps repositories. This high-level information gets correlated in the central storage with the commit compliance data sent from the clusters.
+Deployment Observability Hub is a central storage that is easy to query with complex queries against a large amount of data. It contains deployment data with historical information on workload versions and their deployment state across clusters. Clusters register themselves in the storage and update their compliance status with the GitOps repositories. Clusters operate at the level of Git commits only. High-level information, such as application versions, environments, and cluster type data, is transferred to the central storage from the GitOps repositories. This high-level information gets correlated in the central storage with the commit compliance data sent from the clusters.
## Next steps
-* Explore a [sample implementation of workload management in a multi-cluster environment with GitOps](https://github.com/microsoft/kalypso).
-* Try our [Tutorial: Workload Management in Multi-cluster environment with GitOps](tutorial-workload-management.md) to walk through the implementation.
+* Walk through a sample implementation to explore [workload management in a multi-cluster environment with GitOps](workload-management.md).
+* Explore a [multi-cluster workload management sample repository](https://github.com/microsoft/kalypso).
azure-arc Workload Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/workload-management.md
+
+ Title: 'Explore workload management in a multi-cluster environment with GitOps'
+description: Explore typical use-cases that Platform and Application teams face on a daily basis working with Kubernetes workloads in a multi-cluster environment.
+keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, ci/cd, devops"
+++ Last updated : 03/29/2023++
+# Explore workload management in a multi-cluster environment with GitOps
+
+Enterprise organizations developing cloud-native applications face challenges in deploying, configuring, and promoting a great variety of applications and services across multiple Kubernetes clusters at scale. This environment may include Azure Kubernetes Service (AKS) clusters, clusters running on other public cloud providers, or clusters in on-premises data centers that are connected to Azure through Azure Arc. Refer to the [conceptual article](conceptual-workload-management.md) that explores the business process, challenges, and solution architecture.
+
+This article walks you through an example scenario of the workload deployment and configuration in a multi-cluster Kubernetes environment. First, you deploy a sample infrastructure with a few GitHub repositories and AKS clusters. Next, you work through a set of use cases where you act as different personas working in the same environment: the Platform Team and the Application Team.
+
+## Prerequisites
+
+In order to successfully deploy the sample, you need:
+
+- An Azure subscription. If you don't already have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- [Azure CLI](/cli/azure/install-azure-cli)
+- [GitHub CLI](https://cli.github.com)
+- [Helm](https://helm.sh/docs/helm/helm_install/)
+- [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl)
+- [jq](https://stedolan.github.io/jq/download/)
+- GitHub token with the following scopes: `repo`, `workflow`, `write:packages`, `delete:packages`, `read:org`, `delete_repo`.
+
+## 1 - Deploy the sample
+
+To deploy the sample, run the following script:
+
+```bash
+mkdir kalypso && cd kalypso
+curl -fsSL -o deploy.sh https://raw.githubusercontent.com/microsoft/kalypso/main/deploy/deploy.sh
+chmod 700 deploy.sh
+./deploy.sh -c -p <prefix. e.g. kalypso> -o <github org. e.g. eedorenko> -t <github token> -l <azure-location. e.g. westus2>
+```
+
+This script may take 10-15 minutes to complete. After it's done, it reports the execution result in the output like this:
+
+```output
+Deployment is complete!
+
+Created repositories:
+ - https://github.com/eedorenko/kalypso-control-plane
+ - https://github.com/eedorenko/kalypso-gitops
+ - https://github.com/eedorenko/kalypso-app-src
+ - https://github.com/eedorenko/kalypso-app-gitops
+
+Created AKS clusters in kalypso-rg resource group:
+ - control-plane
+ - drone (Flux based workload cluster)
+ - large (ArgoCD based workload cluster)
+
+```
+
+> [!NOTE]
+> If something goes wrong with the deployment, you can delete the created resources with the following command:
+>
+> ```bash
+> ./deploy.sh -d -p <prefix. e.g. kalypso> -o <github org. e.g. eedorenko> -t <github token> -l <azure-location. e.g. westus2>
+> ```
+
+### Sample overview
+
+This deployment script created an infrastructure, shown in the following diagram:
++
+There are a few Platform Team repositories:
+
+- [Control Plane](https://github.com/microsoft/kalypso-control-plane): Contains a platform model defined with high level abstractions such as environments, cluster types, applications and services, mapping rules and configurations, and promotion workflows.
+- [Platform GitOps](https://github.com/microsoft/kalypso-gitops): Contains final manifests that represent the topology of the multi-cluster environment, such as which cluster types are available in each environment, what workloads are scheduled on them, and what platform configuration values are set.
+- [Services Source](https://github.com/microsoft/kalypso-svc-src): Contains high-level manifest templates of sample dial-tone platform services.
+- [Services GitOps](https://github.com/microsoft/kalypso-svc-gitops): Contains final manifests of sample dial-tone platform services to be deployed across the clusters.
+
+The infrastructure also includes a couple of the Application Team repositories:
+
+- [Application Source](https://github.com/microsoft/kalypso-app-src): Contains a sample application source code, including Docker files, manifest templates and CI/CD workflows.
+- [Application GitOps](https://github.com/microsoft/kalypso-app-gitops): Contains final sample application manifests to be deployed to the deployment targets.
+
+The script created the following Azure Kubernetes Service (AKS) clusters:
+
+- `control-plane` - This cluster is a management cluster that doesn't run any workloads. The `control-plane` cluster hosts [Kalypso Scheduler](https://github.com/microsoft/kalypso-scheduler) operator that transforms high level abstractions from the [Control Plane](https://github.com/microsoft/kalypso-control-plane) repository to the raw Kubernetes manifests in the [Platform GitOps](https://github.com/microsoft/kalypso-gitops) repository.
+- `drone` - A sample workload cluster. This cluster has the [GitOps extension](conceptual-gitops-flux2.md) installed and it uses `Flux` to reconcile manifests from the [Platform GitOps](https://github.com/microsoft/kalypso-gitops) repository. For this sample, the `drone` cluster can represent an Azure Arc-enabled cluster or an AKS cluster with the Flux/GitOps extension.
+- `large` - A sample workload cluster. This cluster has `ArgoCD` installed on it to reconcile manifests from the [Platform GitOps](https://github.com/microsoft/kalypso-gitops) repository.
+
+### Explore Control Plane
+
+The `control plane` repository contains three branches: `main`, `dev` and `stage`. The `dev` and `stage` branches contain configurations that are specific to the `Dev` and `Stage` environments. On the other hand, the `main` branch doesn't represent any specific environment. The content of the `main` branch is common and used by all environments. Any change to the `main` branch is subject to promotion across environments. For example, a new application or a new template can be promoted to the `Stage` environment only after successful testing on the `Dev` environment.
+
+The `main` branch:
+
+|Folder|Description|
+||--|
+|.github/workflows| Contains GitHub workflows that implement the promotional flow.|
+|.environments| Contains a list of environments with pointers to the branches with the environment configurations.|
+|templates| Contains manifest templates for various reconcilers (for example, Flux and ArgoCD) and a template for the workload namespace.|
+|workloads| Contains a list of onboarded applications and services with pointers to the corresponding GitOps repositories.|
+
+The `dev` and `stage` branches:
+
+|Item|Description|
+|-|--|
+|cluster-types| Contains a list of available cluster types in the environment. The cluster types are grouped in custom subfolders. Each cluster type is marked with a set of labels. It specifies a reconciler type that it uses to fetch the manifests from GitOps repositories. The subfolders also contain a number of config maps with the platform configuration values available on the cluster types.|
+|configs/dev-config.yaml| Contains config maps with the platform configuration values applicable for all cluster types in the environment.|
+|scheduling| Contains scheduling policies that map workload deployment targets to the cluster types in the environment.|
+|base-repo.yaml| A pointer to the place in the `Control Plane` repository (`main`) from where the scheduler should take templates and workload registrations.|
+|gitops-repo.yaml| A pointer to the place in the `Platform GitOps` repository where the scheduler should submit PRs with the generated manifests.|
+
+> [!TIP]
+> The folder structure in the `Control Plane` repository doesn't really matter. This example provides one way of organizing files in the repository, but feel free to do it in your own preferred way. The scheduler is interested in the content of the files, rather than where the files are located.
+
+## 2 - Platform Team: Onboard a new application
+
+The Application Team runs their software development lifecycle. They build their application and promote it across environments. They're not aware of what cluster types are available and where their application will be deployed. But they do know that they want to deploy their application in the `Dev` environment for functional and performance testing and in the `Stage` environment for UAT testing.
+
+The Application Team describes this intention in the [workload](https://github.com/microsoft/kalypso-app-src/blob/main/workload/workload.yaml) file in the [Application Source](https://github.com/microsoft/kalypso-app-src) repository:
+
+```yaml
+apiVersion: scheduler.kalypso.io/v1alpha1
+kind: Workload
+metadata:
+ name: hello-world-app
+ labels:
+ type: application
+ family: force
+spec:
+ deploymentTargets:
+ - name: functional-test
+ labels:
+ purpose: functional-test
+ edge: "true"
+ environment: dev
+ manifests:
+ repo: https://github.com/microsoft/kalypso-app-gitops
+ branch: dev
+ path: ./functional-test
+ - name: performance-test
+ labels:
+ purpose: performance-test
+ edge: "false"
+ environment: dev
+ manifests:
+ repo: https://github.com/microsoft/kalypso-app-gitops
+ branch: dev
+ path: ./performance-test
+ - name: uat-test
+ labels:
+ purpose: uat-test
+ environment: stage
+ manifests:
+ repo: https://github.com/microsoft/kalypso-app-gitops
+ branch: stage
+ path: ./uat-test
+```
+
+This file contains a list of three deployment targets. These targets are marked with custom labels and point to the folders in the [Application GitOps](https://github.com/microsoft/kalypso-app-gitops) repository where the Application Team generates application manifests for each deployment target.
+
+With this file, the Application Team requests Kubernetes compute resources from the Platform Team. In response, the Platform Team must register the application in the [Control Plane](https://github.com/microsoft/kalypso-control-plane) repo.
+
+To register the application, open a terminal and use the following script:
+
+```bash
+export org=<github org>
+export prefix=<prefix>
+
+# clone the control-plane repo
+git clone https://github.com/$org/$prefix-control-plane control-plane
+cd control-plane
+
+# create workload registration file
+
+cat <<EOF >workloads/hello-world-app.yaml
+apiVersion: scheduler.kalypso.io/v1alpha1
+kind: WorkloadRegistration
+metadata:
+ name: hello-world-app
+ labels:
+ type: application
+spec:
+ workload:
+ repo: https://github.com/$org/$prefix-app-src
+ branch: main
+ path: workload/
+ workspace: kaizen-app-team
+EOF
+
+git add .
+git commit -m 'workload registration'
+git push
+```
+
+> [!NOTE]
+> For simplicity, this example pushes changes directly to `main`. In practice, you'd create a pull request to submit the changes.
+
+With that in place, the application is onboarded in the control plane. But the control plane still doesn't know how to map the application deployment targets to all of the cluster types.
+
+### Define application scheduling policy on Dev
+
+The Platform Team must define how the application deployment targets will be scheduled on cluster types in the `Dev` environment. To do this, submit scheduling policies for the `functional-test` and `performance-test` deployment targets with the following script:
+
+```bash
+# Switch to the dev branch (representing the Dev environment) in the control-plane folder
+git checkout dev
+mkdir -p scheduling/kaizen
+
+# Create a scheduling policy for the functional-test deployment target
+cat <<EOF >scheduling/kaizen/functional-test-policy.yaml
+apiVersion: scheduler.kalypso.io/v1alpha1
+kind: SchedulingPolicy
+metadata:
+ name: functional-test-policy
+spec:
+ deploymentTargetSelector:
+ workspace: kaizen-app-team
+ labelSelector:
+ matchLabels:
+ purpose: functional-test
+ edge: "true"
+ clusterTypeSelector:
+ labelSelector:
+ matchLabels:
+ restricted: "true"
+ edge: "true"
+EOF
+
+# Create a scheduling policy for the performance-test deployment target
+cat <<EOF >scheduling/kaizen/performance-test-policy.yaml
+apiVersion: scheduler.kalypso.io/v1alpha1
+kind: SchedulingPolicy
+metadata:
+ name: performance-test-policy
+spec:
+ deploymentTargetSelector:
+ workspace: kaizen-app-team
+ labelSelector:
+ matchLabels:
+ purpose: performance-test
+ edge: "false"
+ clusterTypeSelector:
+ labelSelector:
+ matchLabels:
+ size: large
+EOF
+
+git add .
+git commit -m 'application scheduling policies'
+git config pull.rebase false
+git pull --no-edit
+git push
+```
+
+The first policy states that all deployment targets from the `kaizen-app-team` workspace that are marked with the labels `purpose: functional-test` and `edge: "true"` should be scheduled on all cluster types in the environment that are marked with the labels `restricted: "true"` and `edge: "true"`. You can treat a workspace as a group of applications produced by an application team.
+
+The second policy states that all deployment targets from the `kaizen-app-team` workspace that are marked with the labels `purpose: performance-test` and `edge: "false"` should be scheduled on all cluster types in the environment that are marked with the label `size: large`.
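+
+To make the label matching concrete, here's a minimal Python sketch of Kubernetes-style `matchLabels` semantics as the scheduling policies use them. The cluster type labels below are illustrative assumptions (they aren't taken from the sample repositories), and this isn't the actual Kalypso scheduler implementation:
+
+```python
+def matches(match_labels: dict, labels: dict) -> bool:
+    """A selector matches when every selector key/value pair is present on the labeled object."""
+    return all(labels.get(key) == value for key, value in match_labels.items())
+
+# Illustrative labels only.
+deployment_targets = {
+    "functional-test": {"purpose": "functional-test", "edge": "true"},
+    "performance-test": {"purpose": "performance-test", "edge": "false"},
+}
+cluster_types = {
+    "drone": {"restricted": "true", "edge": "true"},
+    "large": {"size": "large"},
+}
+
+# (deploymentTargetSelector, clusterTypeSelector) pairs from the two policies above.
+policies = {
+    "functional-test-policy": ({"purpose": "functional-test", "edge": "true"},
+                               {"restricted": "true", "edge": "true"}),
+    "performance-test-policy": ({"purpose": "performance-test", "edge": "false"},
+                                {"size": "large"}),
+}
+
+for name, (target_selector, cluster_selector) in policies.items():
+    targets = [t for t, labels in deployment_targets.items() if matches(target_selector, labels)]
+    clusters = [c for c, labels in cluster_types.items() if matches(cluster_selector, labels)]
+    print(name, "->", [(t, c) for t in targets for c in clusters])
+
+# functional-test-policy -> [('functional-test', 'drone')]
+# performance-test-policy -> [('performance-test', 'large')]
+```
+
+Note that an empty `labelSelector` matches every cluster type, because there are no key/value pairs that can fail the check.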
+
+This push to the `dev` branch triggers the scheduling process and creates a PR to the `dev` branch in the `Platform GitOps` repository:
++
+Besides `Promoted_Commit_id`, which is just tracking information for the promotion CD flow, the PR contains assignment manifests. The `functional-test` deployment target is assigned to the `drone` cluster type, and the `performance-test` deployment target is assigned to the `large` cluster type. Those manifests land in the `drone` and `large` folders, which contain all assignments to these cluster types in the `Dev` environment.
+
+The `Dev` environment also includes `command-center` and `small` cluster types:
+
+ :::image type="content" source="media/workload-management/dev-cluster-types.png" alt-text="Screenshot showing cluster types in the Dev environment.":::
+
+However, only the `drone` and `large` cluster types were selected by the scheduling policies that you defined.
+
+### Understand deployment target assignment manifests
+
+Before you continue, take a closer look at the generated assignment manifests for the `functional-test` deployment target. There are `namespace.yaml`, `config.yaml` and `reconciler.yaml` manifest files.
+
+`namespace.yaml` defines a namespace that will be created on any `drone` cluster where the `hello-world` application runs.
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+ labels:
+ deploymentTarget: hello-world-app-functional-test
+ environment: dev
+ someLabel: some-value
+ workload: hello-world-app
+ workspace: kaizen-app-team
+ name: dev-kaizen-app-team-hello-world-app-functional-test
+```
+
+`config.yaml` contains all platform configuration values available on any `drone` cluster that the application can use in the `Dev` environment.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: platform-config
+ namespace: dev-kaizen-app-team-hello-world-app-functional-test
+data:
+ CLUSTER_NAME: Drone
+ DATABASE_URL: mysql://restricted-host:3306/mysqlrty123
+ ENVIRONMENT: Dev
+ REGION: East US
+ SOME_COMMON_ENVIRONMENT_VARIABLE: "false"
+```
+
+`reconciler.yaml` contains Flux resources that a `drone` cluster uses to fetch application manifests, prepared by the Application Team for the `functional-test` deployment target.
+
+```yaml
+apiVersion: source.toolkit.fluxcd.io/v1beta2
+kind: GitRepository
+metadata:
+ name: hello-world-app-functional-test
+ namespace: flux-system
+spec:
+ interval: 30s
+ ref:
+ branch: dev
+ secretRef:
+ name: repo-secret
+ url: https://github.com/<github org>/<prefix>-app-gitops
+---
+apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
+kind: Kustomization
+metadata:
+ name: hello-world-app-functional-test
+ namespace: flux-system
+spec:
+ interval: 30s
+ path: ./functional-test
+ prune: true
+ sourceRef:
+ kind: GitRepository
+ name: hello-world-app-functional-test
+ targetNamespace: dev-kaizen-app-team-hello-world-app-functional-test
+```
+
+> [!NOTE]
+> The `control plane` defines that the `drone` cluster type uses `Flux` to reconcile manifests from the application GitOps repositories. The `large` cluster type, on the other hand, reconciles manifests with `ArgoCD`. Therefore, `reconciler.yaml` for the `performance-test` deployment target looks different and contains `ArgoCD` resources.
+
+### Promote application to Stage
+
+Once you approve and merge the PR to the `Platform GitOps` repository, the `drone` and `large` AKS clusters that represent the corresponding cluster types start fetching the assignment manifests. The `drone` cluster has the [GitOps extension](conceptual-gitops-flux2.md) installed, pointing to the `Platform GitOps` repository. It reports its `compliance` status to Azure Resource Graph:
++
+The PR merging event starts a GitHub workflow `checkpromote` in the `control plane` repository. This workflow waits until all clusters with the [GitOps extension](conceptual-gitops-flux2.md) installed that are looking at the `dev` branch in the `Platform GitOps` repository are compliant with the PR commit. In this example, the only such cluster is `drone`.
++
+Once the `checkpromote` workflow is successful, it starts the `cd` workflow that promotes the change (application registration) to the `Stage` environment. For better visibility, it also updates the git commit status in the `control plane` repository:
++
+> [!NOTE]
+> If the `drone` cluster fails to reconcile the assignment manifests for any reason, the promotion flow will fail. The commit status will be marked as failed, and the application registration will not be promoted to the `Stage` environment.
+
+Next, configure a scheduling policy for the `uat-test` deployment target in the stage environment:
+
+```bash
+# Switch to the stage branch (representing the Stage environment) in the control-plane folder
+git checkout stage
+mkdir -p scheduling/kaizen
+
+# Create a scheduling policy for the uat-test deployment target
+cat <<EOF >scheduling/kaizen/uat-test-policy.yaml
+apiVersion: scheduler.kalypso.io/v1alpha1
+kind: SchedulingPolicy
+metadata:
+ name: uat-test-policy
+spec:
+ deploymentTargetSelector:
+ workspace: kaizen-app-team
+ labelSelector:
+ matchLabels:
+ purpose: uat-test
+ clusterTypeSelector:
+ labelSelector: {}
+EOF
+
+git add .
+git commit -m 'application scheduling policies'
+git config pull.rebase false
+git pull --no-edit
+git push
+```
+
+The policy states that all deployment targets from the `kaizen-app-team` workspace that are marked with the label `purpose: uat-test` should be scheduled on all cluster types defined in the environment (the empty `labelSelector` matches every cluster type).
+
+Pushing this policy to the `stage` branch triggers the scheduling process, which creates a PR with the assignment manifests to the `Platform GitOps` repository, similar to those for the `Dev` environment.
+
+As in the case with the `Dev` environment, after reviewing and merging the PR to the `Platform GitOps` repository, the `checkpromote` workflow in the `control plane` repository waits until clusters with the [GitOps extension](conceptual-gitops-flux2.md) (`drone`) reconcile the assignment manifests.
+
+ :::image type="content" source="media/workload-management/check-promote-to-stage.png" alt-text="Screenshot showing promotion to stage.":::
+
+On successful execution, the commit status is updated.
++
+## 3 - Application Dev Team: Build and deploy application
+
+The Application Team regularly submits pull requests to the `main` branch in the `Application Source` repository. Once a PR is merged to `main`, it starts a CI/CD workflow. In this sample, you'll start the workflow manually.
+
+ Go to the `Application Source` repository in GitHub. On the `Actions` tab, select `Run workflow`.
++
+The workflow performs the following actions:
+
+- Builds the application Docker image and pushes it to the GitHub repository's package registry.
+- Generates manifests for the `functional-test` and `performance-test` deployment targets. It uses configuration values from the `dev-configs` branch. The generated manifests are added to a pull request that is automatically merged into the `dev` branch.
+- Generates manifests for the `uat-test` deployment target. It uses configuration values from the `stage-configs` branch.
++
+The generated manifests are added to a pull request to the `stage` branch waiting for approval:
++
+To test the application manually on the `Dev` environment before approving the PR to the `Stage` environment, first verify how the `functional-test` application instance works on the `drone` cluster:
+
+```bash
+kubectl port-forward svc/hello-world-service -n dev-kaizen-app-team-hello-world-app-functional-test 9090:9090 --context=drone
+
+# output:
+# Forwarding from 127.0.0.1:9090 -> 9090
+# Forwarding from [::1]:9090 -> 9090
+
+```
+
+While this command is running, open `localhost:9090` in your browser. You'll see the following greeting page:
++
+The next step is to check how the `performance-test` instance works on the `large` cluster:
+
+```bash
+kubectl port-forward svc/hello-world-service -n dev-kaizen-app-team-hello-world-app-performance-test 8080:8080 --context=large
+
+# output:
+# Forwarding from 127.0.0.1:8080 -> 8080
+# Forwarding from [::1]:8080 -> 8080
+
+```
+
+This time, use port `8080` and open `localhost:8080` in your browser.
+
+Once you're satisfied with the `Dev` environment, approve and merge the PR to the `Stage` environment. After that, test the `uat-test` application instance in the `Stage` environment on both clusters.
+
+Run the following command for the `drone` cluster and open `localhost:8001` in your browser:
+
+```bash
+kubectl port-forward svc/hello-world-service -n stage-kaizen-app-team-hello-world-app-uat-test 8001:8000 --context=drone
+```
+
+Run the following command for the `large` cluster and open `localhost:8002` in your browser:
+
+ ```bash
+kubectl port-forward svc/hello-world-service -n stage-kaizen-app-team-hello-world-app-uat-test 8002:8000 --context=large
+```
+
+> [!NOTE]
+> It may take up to three minutes to reconcile the changes from the application GitOps repository on the `large` cluster.
+
+The application instance on the `large` cluster shows the following greeting page:
+
+ :::image type="content" source="media/workload-management/stage-greeting-page.png" alt-text="Screenshot showing the greeting page on stage.":::
+
+## 4 - Platform Team: Provide platform configurations
+
+Applications in the clusters use the very same database in both the `Dev` and `Stage` environments. Let's change that and configure the `west-us` clusters to provide a different database URL for applications running in the `Stage` environment:
+
+```bash
+# Switch to the stage branch (representing the Stage environment) in the control-plane folder
+git checkout stage
+
+# Update a config map with the configurations for west-us clusters
+cat <<EOF >cluster-types/west-us/west-us-config.yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: west-us-config
+ labels:
+ platform-config: "true"
+ region: west-us
+data:
+ REGION: West US
+ DATABASE_URL: mysql://west-stage:8806/mysql2
+EOF
+
+git add .
+git commit -m 'database url configuration'
+git config pull.rebase false
+git pull --no-edit
+git push
+```
+
+The scheduler scans all config maps in the environment and collects values for each cluster type based on label matching. Then, it puts a `platform-config` config map in every deployment target folder in the `Platform GitOps` repository. The `platform-config` config map contains all of the platform configuration values that the workload can use on this cluster type in this environment.
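+
+As a minimal Python sketch of this collection step (assuming label-subset matching and last-match-wins precedence, which may differ from the real scheduler's rules), consider:
+
+```python
+def applies_to(config_labels: dict, cluster_labels: dict) -> bool:
+    """A config map applies when all of its labels, except the platform-config marker, are present on the cluster type."""
+    return all(cluster_labels.get(key) == value
+               for key, value in config_labels.items() if key != "platform-config")
+
+# Illustrative config maps: one environment-wide, one for west-us clusters.
+config_maps = [
+    {"labels": {"platform-config": "true"},
+     "data": {"ENVIRONMENT": "Stage", "SOME_COMMON_ENVIRONMENT_VARIABLE": "false"}},
+    {"labels": {"platform-config": "true", "region": "west-us"},
+     "data": {"REGION": "West US", "DATABASE_URL": "mysql://west-stage:8806/mysql2"}},
+]
+
+cluster_type_labels = {"region": "west-us", "size": "large"}  # illustrative cluster type labels
+
+platform_config = {}
+for config_map in config_maps:
+    if applies_to(config_map["labels"], cluster_type_labels):
+        platform_config.update(config_map["data"])  # later matches override earlier values
+
+print(platform_config)
+# {'ENVIRONMENT': 'Stage', 'SOME_COMMON_ENVIRONMENT_VARIABLE': 'false', 'REGION': 'West US', 'DATABASE_URL': 'mysql://west-stage:8806/mysql2'}
+```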
+
+In a few seconds, a new PR to the `stage` branch in the `Platform GitOps` repository appears:
++
+Approve the PR and merge it.
+
+The `large` cluster is handled by ArgoCD, which, by default, is configured to reconcile every three minutes. Unlike clusters such as `drone` that have the [GitOps extension](conceptual-gitops-flux2.md) installed, this cluster doesn't report its compliance state to Azure. However, you can still monitor the reconciliation state on the cluster with the ArgoCD UI.
+
+To access the ArgoCD UI on the `large` cluster, run the following command:
+
+```bash
+# Get ArgoCD username and password
+echo "ArgoCD username: admin, password: $(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" --context large| base64 -d)"
+# output:
+# ArgoCD username: admin, password: eCllTELZdIZfApPL
+
+kubectl port-forward svc/argocd-server 8080:80 -n argocd --context large
+```
+
+Next, open `localhost:8080` in your browser and provide the username and password printed by the script. You'll see a web page similar to this one:
+
+ :::image type="content" source="media/workload-management/argocd-ui.png" alt-text="Screenshot showing the Argo CD user interface web page." lightbox="media/workload-management/argocd-ui.png":::
+
+Select the `stage` tile to see more details on the reconciliation state from the `stage` branch to this cluster. You can select the `SYNC` button to force reconciliation and speed up the process.
+
+Once the new configuration has arrived on the cluster, check the `uat-test` application instance at `localhost:8002` after
+running the following commands:
+
+```bash
+kubectl rollout restart deployment hello-world-deployment -n stage-kaizen-app-team-hello-world-app-uat-test --context=large
+kubectl port-forward svc/hello-world-service -n stage-kaizen-app-team-hello-world-app-uat-test 8002:8000 --context=large
+```
+
+You'll see the updated database URL:
++
+## 5 - Platform Team: Add cluster type to environment
+
+Currently, only the `drone` and `large` cluster types are included in the `Stage` environment. Let's add the `small` cluster type to `Stage` as well. Even though there's no physical cluster representing this cluster type, you can see how the scheduler reacts to this change.
+
+```bash
+# Switch to the stage branch (representing the Stage environment) in the control-plane folder
+git checkout stage
+
+# Add "small" cluster type in west-us region
+mkdir -p cluster-types/west-us/small
+cat <<EOF >cluster-types/west-us/small/small-cluster-type.yaml
+apiVersion: scheduler.kalypso.io/v1alpha1
+kind: ClusterType
+metadata:
+ name: small
+ labels:
+ region: west-us
+ size: small
+spec:
+ reconciler: argocd
+ namespaceService: default
+EOF
+
+git add .
+git commit -m 'add new cluster type'
+git config pull.rebase false
+git pull --no-edit
+git push
+```
+
+In a few seconds, the scheduler submits a PR to the `Platform GitOps` repository. According to the `uat-test-policy` that you created, it assigns the `uat-test` deployment target to the new cluster type, as it's supposed to work on all available cluster types in the environment.
++
+## Clean up resources
+When no longer needed, delete the resources that you created. To do so, run the following command:
+
+```bash
+# In kalypso folder
+./deploy.sh -d -p <prefix. e.g. kalypso> -o <github org. e.g. eedorenko> -t <github token> -l <azure-location. e.g. westus2>
+```
+
+## Next steps
+
+You have performed tasks for a few common workload management scenarios in a multi-cluster Kubernetes environment. There are many other scenarios you may want to explore. Continue to use the sample and see how you can implement use cases that are most common in your daily activities.
+
+To understand the underlying concepts and mechanics more deeply, refer to the following resources:
+
+> [!div class="nextstepaction"]
+> - [Concept: Workload Management in Multi-cluster environment with GitOps](conceptual-workload-management.md)
+> - [Sample implementation: Workload Management in Multi-cluster environment with GitOps](https://github.com/microsoft/kalypso)
+
azure-arc Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/private-link-security.md
If you're only planning to use Private Links to support a few machines or server
#### Linux
-1. Using an account with the **sudoers** privilege, run `sudo nano /etc/hosts` to open the hosts file.
+1. Open the `/etc/hosts` file in a text editor.
1. Add the private endpoint IPs and hostnames as shown in the table from step 3 under [Manual DNS server configuration](#manual-dns-server-configuration). The hosts file asks for the IP address first followed by a space and then the hostname.
azure-functions Configure Networking How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-networking-how-to.md
Title: How to configure Azure Functions with a virtual network description: Article that shows you how to perform certain virtual networking tasks for Azure Functions.- Previously updated : 03/04/2022+ Last updated : 03/24/2023 # How to configure Azure Functions with a virtual network
-This article shows you how to perform tasks related to configuring your function app to connect to and run on a virtual network. For an in-depth tutorial on how to secure your storage account, please refer to the [Connect to a Virtual Network tutorial](functions-create-vnet.md). To learn more about Azure Functions and networking, see [Azure Functions networking options](functions-networking-options.md).
+This article shows you how to perform tasks related to configuring your function app to connect to and run on a virtual network. For an in-depth tutorial on how to secure your storage account, refer to the [Connect to a Virtual Network tutorial](functions-create-vnet.md). To learn more about Azure Functions and networking, see [Azure Functions networking options](functions-networking-options.md).
## Restrict your storage account to a virtual network
-When you create a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. You can replace this storage account with one that is secured with service endpoints or private endpoints. When configuring your storage account with private endpoints, public access to your storage account is not automatically disabled. In order to disable public access to your storage account, configure your storage firewall to allow access from only selected networks.
+When you create a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. You can secure a new storage account behind a virtual network during account creation. At this time, you can't secure an existing storage account being used by your function app in the same way.
+> [!NOTE]
+> Securing your storage account is supported for all tiers in both Dedicated (App Service) and Elastic Premium plans. Consumption plans currently don't support virtual networks.
+### During function app creation
-> [!NOTE]
-> This feature currently works for all Windows and Linux virtual network-supported SKUs in the Dedicated (App Service) plan and for Windows Elastic Premium plans. Consumption tier isn't supported.
+You can create a new function app along with a new storage account secured behind a virtual network. The following links show you how to create these resources by using either the Azure portal or by using deployment templates:
+
+# [Azure portal](#tab/portal)
+
+Complete the following tutorial to create a new function app with a secured storage account: [Use private endpoints to integrate Azure Functions with a virtual network](functions-create-vnet.md).
+
+# [Deployment templates](#tab/templates)
+
+Use Bicep or Azure Resource Manager (ARM) [quickstart templates](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/function-app-storage-private-endpoints) to create secured function app and storage account resources.
+++
+### Existing function app
+
+When you have an existing function app, you can't secure the storage account currently being used by the app. You must instead swap out the existing storage account for a new, secured storage account.
+
+To secure the storage for an existing function app:
+
+1. Choose a function app with a storage account that doesn't have service endpoints or private endpoints enabled.
-To set up a function with a storage account restricted to a private network:
+1. [Enable virtual network integration](./functions-networking-options.md#enable-virtual-network-integration) for your function app.
-1. Create a function with a storage account that does not have service endpoints enabled.
+1. Create or configure a second storage account. This is going to be the secured storage account that your function app uses instead.
-1. Configure the function to connect to your virtual network.
+1. [Create a file share](../storage/files/storage-how-to-create-file-share.md#create-a-file-share) in the new storage account.
-1. Create or configure a different storage account. This will be the storage account we secure with service endpoints and connect our function.
+1. Secure the new storage account in one of the following ways:
-1. [Create a file share](../storage/files/storage-how-to-create-file-share.md#create-a-file-share) in the secured storage account.
+ * [Create a private endpoint](../storage/common/storage-private-endpoints.md#creating-a-private-endpoint). When using private endpoint connections, the storage account must have private endpoints for the `file` and `blob` subresources. For Durable Functions, you must also make `queue` and `table` subresources accessible through private endpoints.
-1. Enable service endpoints or private endpoint for the storage account.
- * If using private endpoint connections, the storage account will need a private endpoint for the `file` and `blob` sub-resources. If using certain capabilities like Durable Functions, you will also need `queue` and `table` accessible through a private endpoint connection.
- * If using service endpoints, enable the subnet dedicated to your function apps for storage accounts.
+ * [Enable a service endpoint from the virtual network](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network). When using service endpoints, enable the subnet dedicated to your function apps for storage accounts on the firewall.
-1. Copy the file and blob content from the function app storage account to the secured storage account and file share.
+1. Copy the file and blob content from the current storage account used by the function app to the newly secured storage account and file share.
1. Copy the connection string for this storage account.
azure-functions Durable Functions Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-bindings.md
Title: Bindings for Durable Functions - Azure description: How to use triggers and bindings for the Durable Functions extension for Azure Functions. Previously updated : 12/07/2022 Last updated : 03/22/2023
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Bindings for Durable Functions (Azure Functions) The [Durable Functions](durable-functions-overview.md) extension introduces three trigger bindings that control the execution of orchestrator, entity, and activity functions. It also introduces an output binding that acts as a client for the Durable Functions runtime.
+Make sure to choose your Durable Functions development language at the top of the article.
++
+> [!IMPORTANT]
+> This article supports both Python v1 and Python v2 programming models for Durable Functions.
+> The Python v2 programming model is currently in preview.
+
+## Python v2 programming model
+
+Durable Functions provides preview support of the new [Python v2 programming model](../functions-reference-python.md?pivots=python-mode-decorators). To use the v2 model, you must install the Durable Functions SDK, which is the PyPI package `azure-functions-durable`, version `1.2.2` or a later version. During the preview, you can provide feedback and suggestions in the [Durable Functions SDK for Python repo](https://github.com/Azure/azure-functions-durable-python/issues).
+
+Using [Extension Bundles](../functions-bindings-register.md#extension-bundles) isn't currently supported for the v2 model with Durable Functions. You'll instead need to manage your extensions manually as follows:
+
+1. Remove the `extensionBundle` section of your `host.json` as described in [this Functions article](../functions-run-local.md#install-extensions).
+
+1. Run the `func extensions install --package Microsoft.Azure.WebJobs.Extensions.DurableTask --version 2.9.1` command on your terminal. This installs the Durable Functions extension for your app, which allows you to use the v2 model preview.
++ ## Orchestration trigger The orchestration trigger enables you to author [durable orchestrator functions](durable-functions-types-features-overview.md#orchestrator-functions). This trigger executes when a new orchestration instance is scheduled and when an existing orchestration instance receives an event. Examples of events that can trigger orchestrator functions include durable timer expirations, activity function responses, and events raised by external clients.
-When you author functions in .NET, the orchestration trigger is configured using the [OrchestrationTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.orchestrationtriggerattribute) .NET attribute. For Java, the `@DurableOrchestrationTrigger` annotation is used.
+When you author functions in .NET, the orchestration trigger is configured using the [OrchestrationTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.orchestrationtriggerattribute) .NET attribute.
+For Java, the `@DurableOrchestrationTrigger` annotation is used to configure the orchestration trigger.
+When you write orchestrator functions, the orchestration trigger is defined by the following JSON object in the `bindings` array of the *function.json* file:
-When you write orchestrator functions in scripting languages, like JavaScript, Python, or PowerShell, the orchestration trigger is defined by the following JSON object in the `bindings` array of the *function.json* file:
+```json
+{
+ "name": "<Name of input parameter in function signature>",
+ "orchestration": "<Optional - name of the orchestration>",
+ "type": "orchestrationTrigger",
+ "direction": "in"
+}
+```
+
+* `orchestration` is the name of the orchestration that clients must use when they want to start new instances of this orchestrator function. This property is optional. If not specified, the name of the function is used.
+Azure Functions supports two programming models for Python. The way that you define an orchestration trigger depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define an orchestration trigger using the `orchestration_trigger` decorator directly in your Python function code.
+
+In the v2 model, the Durable Functions triggers and bindings are accessed from an instance of `DFApp`, which is a subclass of `FunctionApp` that additionally exports Durable Functions-specific decorators.
+
+# [v1](#tab/python-v1)
+When you write orchestrator functions in the Python v1 programming model, the orchestration trigger is defined by the following JSON object in the `bindings` array of the *function.json* file:
```json {
When you write orchestrator functions in scripting languages, like JavaScript, P
* `orchestration` is the name of the orchestration that clients must use when they want to start new instances of this orchestrator function. This property is optional. If not specified, the name of the function is used. +++ Internally, this trigger binding polls the configured durable store for new orchestration events, such as orchestration start events, durable timer expiration events, activity function response events, and external events raised by other functions. ### Trigger behavior
Here are some notes about the orchestration trigger:
> [!WARNING] > Orchestrator functions should never use any input or output bindings other than the orchestration trigger binding. Doing so has the potential to cause problems with the Durable Task extension because those bindings may not obey the single-threading and I/O rules. If you'd like to use other bindings, add them to an activity function called from your orchestrator function. For more information about coding constraints for orchestrator functions, see the [Orchestrator function code constraints](durable-functions-code-constraints.md) documentation. > [!WARNING]
-> JavaScript and Python orchestrator functions should never be declared `async`.
+> Orchestrator functions should never be declared `async`.
### Trigger usage
The orchestration trigger binding supports both inputs and outputs. Here are som
The following example code shows what the simplest "Hello World" orchestrator function might look like. Note that this example orchestrator doesn't actually schedule any tasks.
-# [C# (InProc)](#tab/csharp-inproc)
+The specific attribute used to define the trigger depends on whether you are running your C# functions [in-process](../functions-dotnet-class-library.md) or in an [isolated worker process](../dotnet-isolated-process-guide.md).
+
+# [In-process](#tab/in-process)
```csharp [FunctionName("HelloWorld")]
public static string Run([OrchestrationTrigger] IDurableOrchestrationContext con
> [!NOTE] > The previous code is for Durable Functions 2.x. For Durable Functions 1.x, you must use `DurableOrchestrationContext` instead of `IDurableOrchestrationContext`. For more information about the differences between versions, see the [Durable Functions Versions](durable-functions-versions.md) article.
-# [C# (Isolated)](#tab/csharp-isolated)
+# [Isolated process](#tab/isolated-process)
```csharp [Function("HelloWorld")]
public static string Run([OrchestrationTrigger] TaskOrchestrationContext context
> [!NOTE] > In both Durable functions in-proc and in .NET-isolated, the orchestration input can be extracted via `context.GetInput<T>()`. However, .NET-isolated also supports the input being supplied as a parameter, as shown above. The input binding will bind to the first parameter which has no binding attribute on it and is not a well-known type already covered by other input bindings (ie: `FunctionContext`).
-# [JavaScript](#tab/javascript)
++ ```javascript const df = require("durable-functions");
module.exports = df.orchestrator(function*(context) {
> [!NOTE] > The `durable-functions` library takes care of calling the synchronous `context.done` method when the generator function exits.
+# [v2](#tab/python-v2)
+
+```python
+import azure.functions as func
+import azure.durable_functions as df
-# [Python](#tab/python)
+myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+@myApp.orchestration_trigger(context_name="context")
+def my_orchestrator(context):
+ result = yield context.call_activity("Hello", "Tokyo")
+ return result
+```
+
+# [v1](#tab/python-v1)
```python import azure.durable_functions as df
def orchestrator_function(context: df.DurableOrchestrationContext):
main = df.Orchestrator.create(orchestrator_function) ```-
-# [PowerShell](#tab/powershell)
```powershell param($Context)
param($Context)
$input = $Context.Input $input ```-
-# [Java](#tab/java)
```java @FunctionName("HelloWorldOrchestration")
public String helloWorldOrchestration(
return String.format("Hello %s!", ctx.getInput(String.class)); } ```- Most orchestrator functions call activity functions, so here is a "Hello World" example that demonstrates how to call an activity function:-
-# [C# (InProc)](#tab/csharp-inproc)
+# [In-process](#tab/in-process)
```csharp [FunctionName("HelloWorld")]
public static async Task<string> Run(
> [!NOTE] > The previous code is for Durable Functions 2.x. For Durable Functions 1.x, you must use `DurableOrchestrationContext` instead of `IDurableOrchestrationContext`. For more information about the differences between versions, see the [Durable Functions versions](durable-functions-versions.md) article.
-# [C# (Isolated)](#tab/csharp-isolated)
+# [Isolated process](#tab/isolated-process)
```csharp [Function("HelloWorld")]
public static async Task<string> Run(
} ```
-# [JavaScript](#tab/javascript)
++ ```javascript const df = require("durable-functions");
module.exports = df.orchestrator(function*(context) {
return result; }); ```-
-# [Python](#tab/python)
-
-```python
-import azure.durable_functions as df
-
-def orchestrator_function(context: df.DurableOrchestrationContext):
- input = context.get_input()
- result = yield context.call_activity('SayHello', input['name'])
- return result
-
-main = df.Orchestrator.create(orchestrator_function)
-```
-
-# [PowerShell](#tab/powershell)
-
-```powershell
-param($Context)
-
-$name = $Context.Input.Name
-
-$output = Invoke-DurableActivity -FunctionName 'SayHello' -Input $name
-
-$output
-```
-
-# [Java](#tab/java)
```java @FunctionName("HelloWorld")
public String helloWorldOrchestration(
return result; } ```-- ## Activity trigger The activity trigger enables you to author functions that are called by orchestrator functions, known as [activity functions](durable-functions-types-features-overview.md#activity-functions).
-If you're authoring functions in .NET, the activity trigger is configured using the [ActivityTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.activitytriggerattribute) .NET attribute. For Java, the `@DurableActivityTrigger` annotation is used.
+The activity trigger is configured using the [ActivityTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.activitytriggerattribute) .NET attribute.
+The activity trigger is configured using the `@DurableActivityTrigger` annotation.
+The activity trigger is defined by the following JSON object in the `bindings` array of *function.json*:
+
+```json
+{
+ "name": "<Name of input parameter in function signature>",
+ "activity": "<Optional - name of the activity>",
+ "type": "activityTrigger",
+ "direction": "in"
+}
+```
+
+* `activity` is the name of the activity. This value is the name that orchestrator functions use to invoke this activity function. This property is optional. If not specified, the name of the function is used.
+The way that you define an activity trigger depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The activity trigger is defined using the `activity_trigger` decorator directly in your Python function code.
-If you're using JavaScript, Python, or PowerShell, the activity trigger is defined by the following JSON object in the `bindings` array of *function.json*:
+# [v1](#tab/python-v1)
+The activity trigger is defined by the following JSON object in the `bindings` array of *function.json*:
```json {
If you're using JavaScript, Python, or PowerShell, the activity trigger is defin
* `activity` is the name of the activity. This value is the name that orchestrator functions use to invoke this activity function. This property is optional. If not specified, the name of the function is used. ++ Internally, this trigger binding polls the configured durable store for new activity execution events. ### Trigger behavior
The activity trigger binding supports both inputs and outputs, just like the orc
### Trigger sample
-The following example code shows what a simple `SayHello` activity function might look like:
+The following example code shows what a simple `SayHello` activity function might look like.
-# [C# (InProc)](#tab/csharp-inproc)
+# [In-process](#tab/in-process)
```csharp [FunctionName("SayHello")]
public static string SayHello([ActivityTrigger] string name)
} ```
-# [C# (Isolated)](#tab/csharp-isolated)
+# [Isolated process](#tab/isolated-process)
In the .NET-isolated worker, only serializable types representing your input are supported for the `[ActivityTrigger]`.
public static string SayHello([ActivityTrigger] string name)
} ```
-# [JavaScript](#tab/javascript)
+ ```javascript module.exports = async function(context) { return `Hello ${context.bindings.name}!`;
module.exports = async function(context, name) {
}; ```
-# [Python](#tab/python)
+
+# [v2](#tab/python-v2)
+
+```python
+import azure.functions as func
+import azure.durable_functions as df
+
+myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+
+@myApp.activity_trigger(input_name="myInput")
+def my_activity(myInput: str):
+ return "Hello " + myInput
+```
+
+# [v1](#tab/python-v1)
```python def main(name: str) -> str: return f"Hello {name}!" ```
-# [PowerShell](#tab/powershell)
++ ```powershell param($name) "Hello $name!" ```-
-# [Java](#tab/java)
- ```java @FunctionName("SayHello") public String sayHello(@DurableActivityTrigger(name = "name") String name) { return String.format("Hello %s!", name); } ```-- ### Using input and output bindings
-You can use regular input and output bindings in addition to the activity trigger binding. For example, you can take the input to your activity binding, and send a message to an EventHub using the EventHub output binding:
+You can use regular input and output bindings in addition to the activity trigger binding.
+
+For example, you can take the input to your activity binding, and send a message to an EventHub using the EventHub output binding:
```json {
module.exports = async function (context) {
context.bindings.outputEventHubMessage = context.bindings.message; }; ``` ## Orchestration client
The orchestration client binding enables you to write functions that interact wi
* Send events to them while they're running. * Purge instance history.
-If you're using .NET, you can bind to the orchestration client by using the [DurableClientAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.durableclientattribute) attribute ([OrchestrationClientAttribute](/dotnet/api/microsoft.azure.webjobs.orchestrationclientattribute?view=azure-dotnet-legacy&preserve-view=true) in Durable Functions v1.x). For Java, use the `@DurableClientInput` annotation.
+You can bind to the orchestration client by using the [DurableClientAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.durableclientattribute) attribute ([OrchestrationClientAttribute](/dotnet/api/microsoft.azure.webjobs.orchestrationclientattribute?view=azure-dotnet-legacy&preserve-view=true) in Durable Functions v1.x).
+You can bind to the orchestration client by using the `@DurableClientInput` annotation.
+The durable client trigger is defined by the following JSON object in the `bindings` array of *function.json*:
-If you're using scripting languages, like JavaScript, Python, or PowerShell, the durable client trigger is defined by the following JSON object in the `bindings` array of *function.json*:
+```json
+{
+ "name": "<Name of input parameter in function signature>",
+ "taskHub": "<Optional - name of the task hub>",
+ "connectionName": "<Optional - name of the connection string app setting>",
+ "type": "orchestrationClient",
+ "direction": "in"
+}
+```
+
+* `taskHub` - Used in scenarios where multiple function apps share the same storage account but need to be isolated from each other. If not specified, the default value from `host.json` is used. This value must match the value used by the target orchestrator functions.
+* `connectionName` - The name of an app setting that contains a storage account connection string. The storage account represented by this connection string must be the same one used by the target orchestrator functions. If not specified, the default storage account connection string for the function app is used.
+
+> [!NOTE]
+> In most cases, we recommend that you omit these properties and rely on the default behavior.
+The way that you define a durable client trigger depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The durable client is defined using the `durable_client_input` decorator directly in your Python function code.
+
+# [v1](#tab/python-v1)
+The durable client trigger is defined by the following JSON object in the `bindings` array of *function.json*:
```json {
If you're using scripting languages, like JavaScript, Python, or PowerShell, the
> [!NOTE] > In most cases, we recommend that you omit these properties and rely on the default behavior. ++ ### Client usage
-In .NET functions, you typically bind to [IDurableClient](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableclient) ([DurableOrchestrationClient](/dotnet/api/microsoft.azure.webjobs.durableorchestrationclient?view=azure-dotnet-legacy&preserve-view=true) in Durable Functions v1.x), which gives you full access to all orchestration client APIs supported by Durable Functions. For Java, you bind to the `DurableClientContext` class. In other languages, you must use the language-specific SDK to get access to a client object.
+You typically bind to [IDurableClient](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableclient) ([DurableOrchestrationClient](/dotnet/api/microsoft.azure.webjobs.durableorchestrationclient?view=azure-dotnet-legacy&preserve-view=true) in Durable Functions v1.x), which gives you full access to all orchestration client APIs supported by Durable Functions.
+You typically bind to the `DurableClientContext` class.
+You must use the language-specific SDK to get access to a client object.
Here's an example queue-triggered function that starts a "HelloWorld" orchestration.
-# [C# (InProc)](#tab/csharp-inproc)
+
+# [In-process](#tab/in-process)
```csharp [FunctionName("QueueStart")]
public static Task Run(
> [!NOTE] > The previous C# code is for Durable Functions 2.x. For Durable Functions 1.x, you must use `OrchestrationClient` attribute instead of the `DurableClient` attribute, and you must use the `DurableOrchestrationClient` parameter type instead of `IDurableOrchestrationClient`. For more information about the differences between versions, see the [Durable Functions Versions](durable-functions-versions.md) article.
-# [C# (Isolated)](#tab/csharp-isolated)
+# [Isolated process](#tab/isolated-process)
```csharp [Function("QueueStart")]
public static Task Run(
} ```
-# [JavaScript](#tab/javascript)
++ **function.json** ```json
public static Task Run(
} ``` **index.js** ```javascript const df = require("durable-functions");
module.exports = async function (context) {
return instanceId = await client.startNew("HelloWorld", undefined, context.bindings.input); }; ```+
+**run.ps1**
+```powershell
+param([string] $input, $TriggerMetadata)
+
+$InstanceId = Start-DurableOrchestration -FunctionName $FunctionName -Input $input
+```
-# [Python](#tab/python)
+# [v2](#tab/python-v2)
+
+```python
+import azure.functions as func
+import azure.durable_functions as df
+
+myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+
+@myApp.route(route="orchestrators/{functionName}")
+@myApp.durable_client_input(client_name="client")
+async def durable_trigger(req: func.HttpRequest, client):
+ function_name = req.route_params.get('functionName')
+ instance_id = await client.start_new(function_name)
+ response = client.create_check_status_response(req, instance_id)
+ return response
+```
+
+# [v1](#tab/python-v1)
**`function.json`** ```json
async def main(msg: func.QueueMessage, starter: str) -> None:
payload = msg.get_body().decode('utf-8') instance_id = await client.start_new("HelloWorld", client_input=payload) ```+
-# [PowerShell](#tab/powershell)
-
-**function.json**
-```json
-{
- "bindings": [
- {
- "name": "input",
- "type": "queueTrigger",
- "queueName": "durable-function-trigger",
- "direction": "in"
- },
- {
- "name": "starter",
- "type": "durableClient",
- "direction": "in"
- }
- ]
-}
-```
-
-**run.ps1**
-```powershell
-param([string] $input, $TriggerMetadata)
-
-$InstanceId = Start-DurableOrchestration -FunctionName $FunctionName -Input $input
-```
-
-# [Java](#tab/java)
```java @FunctionName("QueueStart")
public void queueStart(
durableContext.getClient().scheduleNewOrchestrationInstance("HelloWorld", input); } ```--- More details on starting instances can be found in [Instance management](durable-functions-instance-management.md). ## Entity trigger
Entity triggers allow you to author [entity functions](durable-functions-entitie
Internally, this trigger binding polls the configured durable store for new entity operations that need to be executed.
-If you're authoring functions in .NET, the entity trigger is configured using the [EntityTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.entitytriggerattribute) .NET attribute.
+The entity trigger is configured using the [EntityTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.entitytriggerattribute) .NET attribute.
-If you're using JavaScript, Python, or PowerShell, the entity trigger is defined by the following JSON object in the `bindings` array of *function.json*:
+> [!NOTE]
+> Entity triggers aren't yet supported for isolated worker process apps.
+The entity trigger is defined by the following JSON object in the `bindings` array of *function.json*:
```json {
If you're using JavaScript, Python, or PowerShell, the entity trigger is defined
} ```
+By default, the name of an entity is the name of the function.
> [!NOTE]
-> Entity triggers are not yet supported in Java or in the .NET-isolated worker.
+> Entity triggers aren't yet supported for Java.
+The way that you define an entity trigger depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The entity trigger is defined using the `entity_trigger` decorator directly in your Python function code.
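+
+For illustration only, a minimal counter entity under the v2 model might look like the following sketch. The `context_name` parameter name is an assumption that mirrors the other Durable decorators, and the state API (`get_state`, `operation_name`, `get_input`, `set_state`) follows the v1 entity programming model; verify both against the current SDK:
+
+```python
+import azure.functions as func
+import azure.durable_functions as df
+
+myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+
+# Assumed decorator parameter name, mirroring orchestration_trigger(context_name=...).
+@myApp.entity_trigger(context_name="context")
+def counter_entity(context):
+    current_value = context.get_state(lambda: 0)   # default state is 0
+    operation = context.operation_name
+    if operation == "add":
+        current_value += context.get_input()
+    elif operation == "reset":
+        current_value = 0
+    context.set_state(current_value)
+```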
+
+# [v1](#tab/python-v1)
+The entity trigger is defined by the following JSON object in the `bindings` array of *function.json*:
+
+```json
+{
+ "name": "<Name of input parameter in function signature>",
+ "entityName": "<Optional - name of the entity>",
+ "type": "entityTrigger",
+ "direction": "in"
+}
+```
By default, the name of an entity is the name of the function. ++ ### Trigger behavior Here are some notes about the entity trigger:
For more information and examples on defining and interacting with entity trigge
The entity client binding enables you to asynchronously trigger [entity functions](#entity-trigger). These functions are sometimes referred to as [client functions](durable-functions-types-features-overview.md#client-functions).
-If you're using .NET precompiled functions, you can bind to the entity client by using the [DurableClientAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.durableclientattribute) .NET attribute.
+You can bind to the entity client by using the [DurableClientAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.durableclientattribute) .NET attribute in .NET class library functions.
> [!NOTE] > The `[DurableClientAttribute]` can also be used to bind to the [orchestration client](#orchestration-client).
-If you're using scripting languages (like C# scripting, JavaScript, or Python) for development, the entity trigger is defined by the following JSON object in the `bindings` array of *function.json*:
+The entity client is defined by the following JSON object in the `bindings` array of *function.json*:
```json {
If you're using scripting languages (like C# scripting, JavaScript, or Python) f
} ```
+* `taskHub` - Used in scenarios where multiple function apps share the same storage account but need to be isolated from each other. If not specified, the default value from `host.json` is used. This value must match the value used by the target entity functions.
+* `connectionName` - The name of an app setting that contains a storage account connection string. The storage account represented by this connection string must be the same one used by the target entity functions. If not specified, the default storage account connection string for the function app is used.
+ > [!NOTE]
-> Entity clients are not yet supported in Java.
+> In most cases, we recommend that you omit the optional properties and rely on the default behavior.
+The way that you define an entity client depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The entity client is defined using the `durable_client_input` decorator directly in your Python function code.
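+
+As an illustrative sketch only (the route, entity name, and operation are assumptions; `signal_entity` and `EntityId` come from the Durable Functions Python SDK), an HTTP-triggered client that signals an entity might look like this:
+
+```python
+import azure.functions as func
+import azure.durable_functions as df
+
+myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+
+@myApp.route(route="counters/{entityKey}/add")
+@myApp.durable_client_input(client_name="client")
+async def add_to_counter(req: func.HttpRequest, client) -> func.HttpResponse:
+    # Signal the hypothetical "counter_entity" entity to run its "add" operation with an input of 1.
+    entity_id = df.EntityId("counter_entity", req.route_params["entityKey"])
+    await client.signal_entity(entity_id, "add", 1)
+    return func.HttpResponse("Signaled", status_code=202)
+```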
+
+# [v1](#tab/python-v1)
+The entity client is defined by the following JSON object in the `bindings` array of *function.json*:
+
+```json
+{
+ "name": "<Name of input parameter in function signature>",
+ "taskHub": "<Optional - name of the task hub>",
+ "connectionName": "<Optional - name of the connection string app setting>",
+ "type": "durableClient",
+ "direction": "in"
+}
+```
* `taskHub` - Used in scenarios where multiple function apps share the same storage account but need to be isolated from each other. If not specified, the default value from `host.json` is used. This value must match the value used by the target entity functions. * `connectionName` - The name of an app setting that contains a storage account connection string. The storage account represented by this connection string must be the same one used by the target entity functions. If not specified, the default storage account connection string for the function app is used.
If you're using scripting languages (like C# scripting, JavaScript, or Python) f
> [!NOTE] > In most cases, we recommend that you omit the optional properties and rely on the default behavior. +
+> [!NOTE]
+> Entity clients aren't yet supported for Java.
+ For more information and examples on interacting with entities as a client, see the [Durable Entities](durable-functions-entities.md#access-entities) documentation. <a name="host-json"></a>
azure-functions Functions Bindings Cosmosdb V2 Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md
Title: Azure Cosmos DB input binding for Functions 2.x and higher description: Learn to use the Azure Cosmos DB input binding in Azure Functions. Previously updated : 03/04/2022 Last updated : 03/02/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
For information on setup and configuration details, see the [overview](./functio
> When the collection is [partitioned](../cosmos-db/partitioning-overview.md#logical-partitions), lookup operations must also specify the partition key value. >
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example Unless otherwise noted, examples in this article target version 3.x of the [Azure Cosmos DB extension](functions-bindings-cosmosdb-v2.md). For use with extension version 4.x, you need to replace the string `collection` in property and attribute names with `container`.
This section contains the following examples that read a single document by spec
<a id="queue-trigger-look-up-id-from-json-python"></a>
+The examples depend on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+ ### Queue trigger, look up ID from JSON
-The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function reads a single document and updates the document's text value.
+The following example shows an Azure Cosmos DB input binding. The function reads a single document and updates the document's text value.
+
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.queue_trigger(arg_name="msg",
+ queue_name="outqueue",
+ connection="AzureWebJobsStorage")
+@app.cosmos_db_input(arg_name="inputDocument",
+ database_name="MyDatabase",
+ collection_name="MyCollection",
+ id="{msg.payload_property}",
+ partition_key="{msg.payload_property}",
+ connection_string_setting="MyAccount_COSMOSDB")
+@app.cosmos_db_output(arg_name="outputDocument",
+ database_name="MyDatabase",
+ collection_name="MyCollection",
+ connection_string_setting="MyAccount_COSMOSDB")
+def test_function(msg: func.QueueMessage,
+ inputDocument: func.DocumentList,
+ outputDocument: func.Out[func.Document]):
+ doc = inputDocument[0]
+ doc["text"] = "This was updated!"
+ outputDocument.set(doc)
+ print(f"Updated document.")
+```
+
+# [v1](#tab/python-v1)
Here's the binding data in the *function.json* file:
def main(queuemsg: func.QueueMessage, documents: func.DocumentList) -> func.Docu
return document ``` ++ <a id="http-trigger-look-up-id-from-query-string-python"></a> ### HTTP trigger, look up ID from query string
-The following example shows a [Python function](functions-reference-python.md) that retrieves a single document. The function is triggered by an HTTP request that uses a query string to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a `ToDoItem` document from the specified database and collection.
+The following example shows a function that retrieves a single document. The function is triggered by an HTTP request that uses a query string to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a `ToDoItem` document from the specified database and collection.
+
+# [v2](#tab/python-v2)
+
+No equivalent sample for v2 at this time.
+
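+For illustration only, here's a rough sketch of how this scenario might look with the v2 decorators. It isn't an official sample: it reuses the `cosmos_db_input` parameters shown in the queue trigger example, assumes that `{Query.id}` and `{Query.partitionKeyValue}` binding expressions resolve from the request query string, and uses placeholder database, collection, and connection setting names.
+
+```python
+import azure.functions as func
+
+app = func.FunctionApp()
+
+# Hypothetical sketch (not an official sample): look up a single document whose
+# ID and partition key value come from the query string, for example
+# /api/todoitems?id=<id>&partitionKeyValue=<value>.
+@app.route(route="todoitems", auth_level=func.AuthLevel.ANONYMOUS)
+@app.cosmos_db_input(arg_name="todoitems",
+                     database_name="ToDoItems",
+                     collection_name="Items",
+                     id="{Query.id}",
+                     partition_key="{Query.partitionKeyValue}",
+                     connection_string_setting="CosmosDbConnectionSetting")
+def get_todoitem(req: func.HttpRequest, todoitems: func.DocumentList) -> func.HttpResponse:
+    if todoitems:
+        # Return the matching document as JSON.
+        return func.HttpResponse(todoitems[0].to_json(), mimetype="application/json")
+    return func.HttpResponse("Document not found.", status_code=404)
+```
+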
+# [v1](#tab/python-v1)
+ Here's the *function.json* file:
def main(req: func.HttpRequest, todoitems: func.DocumentList) -> str:
return 'OK' ``` ++ <a id="http-trigger-look-up-id-from-route-data-python"></a> ### HTTP trigger, look up ID from route data
-The following example shows a [Python function](functions-reference-python.md) that retrieves a single document. The function is triggered by an HTTP request that uses route data to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a `ToDoItem` document from the specified database and collection.
+The following example shows a function that retrieves a single document. The function is triggered by an HTTP request that uses route data to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a `ToDoItem` document from the specified database and collection.
+
+# [v2](#tab/python-v2)
+
+No equivalent sample for v2 at this time.
+
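+For illustration only, here's a rough sketch of how this scenario might look with the v2 decorators. It isn't an official sample: it assumes that the `{partitionKeyValue}` and `{id}` route parameters can be used as binding expressions in the input binding, and it uses placeholder database, collection, and connection setting names.
+
+```python
+import azure.functions as func
+
+app = func.FunctionApp()
+
+# Hypothetical sketch (not an official sample): the route parameters supply the
+# document ID and partition key value, for example
+# /api/todoitems/<partition key value>/<id>.
+@app.route(route="todoitems/{partitionKeyValue}/{id}", auth_level=func.AuthLevel.ANONYMOUS)
+@app.cosmos_db_input(arg_name="todoitems",
+                     database_name="ToDoItems",
+                     collection_name="Items",
+                     id="{id}",
+                     partition_key="{partitionKeyValue}",
+                     connection_string_setting="CosmosDbConnectionSetting")
+def get_todoitem(req: func.HttpRequest, todoitems: func.DocumentList) -> func.HttpResponse:
+    if todoitems:
+        return func.HttpResponse(todoitems[0].to_json(), mimetype="application/json")
+    return func.HttpResponse("Document not found.", status_code=404)
+```
+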
+# [v1](#tab/python-v1)
Here's the *function.json* file:
def main(req: func.HttpRequest, todoitems: func.DocumentList) -> str:
return 'OK' ``` ++ <a id="queue-trigger-get-multiple-docs-using-sqlquery-python"></a> ### Queue trigger, get multiple docs, using SqlQuery
-The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function retrieves multiple documents specified by a SQL query, using a queue trigger to customize the query parameters.
+The following example shows an Azure Cosmos DB input binding and a Python function that uses the binding. The function retrieves multiple documents specified by a SQL query, using a queue trigger to customize the query parameters.
The queue trigger provides a parameter `departmentId`. A queue message of `{ "departmentId" : "Finance" }` would return all records for the finance department.
+# [v2](#tab/python-v2)
+
+No equivalent sample for v2 at this time.
+
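+For illustration only, here's a rough sketch of how this scenario might look with the v2 decorators. It isn't an official sample: it assumes the `cosmos_db_input` decorator exposes a `sql_query` parameter that mirrors the `sqlQuery` property in *function.json*, and the queue, database, collection, and connection setting names are placeholders.
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+# Hypothetical sketch (not an official sample): the {departmentId} binding
+# expression resolves from the JSON payload of the triggering queue message,
+# for example { "departmentId": "Finance" }.
+@app.queue_trigger(arg_name="msg",
+                   queue_name="department-queue",
+                   connection="AzureWebJobsStorage")
+@app.cosmos_db_input(arg_name="documents",
+                     database_name="MyDatabase",
+                     collection_name="MyCollection",
+                     sql_query="SELECT * FROM c WHERE c.departmentId = {departmentId}",
+                     connection_string_setting="MyAccount_COSMOSDB")
+def get_department_docs(msg: func.QueueMessage, documents: func.DocumentList):
+    # Log how many documents the query returned for this queue message.
+    logging.info("Query returned %d documents.", len(documents))
+```
+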
+# [v1](#tab/python-v1)
+ Here's the binding data in the *function.json* file: ```json
Here's the binding data in the *function.json* file:
} ``` ++ ::: zone-end ::: zone pivot="programming-language-csharp" ## Attributes
Both [in-process](functions-dotnet-class-library.md) and [isolated worker proces
::: zone-end +
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using a decorator, the following properties on the `cosmos_db_input` decorator define the Azure Cosmos DB input binding:
+
+| Property | Description |
+|-|--|
+|`arg_name` | The variable name used in function code that represents the document or list of documents returned by the binding. |
+|`database_name` | The name of the Azure Cosmos DB database that contains the collection. |
+|`collection_name` | The name of the Azure Cosmos DB collection that contains the document. |
+|`connection_string_setting` | The name of the app setting that contains the Azure Cosmos DB connection string. |
+|`partition_key` | The partition key value of the document to retrieve. |
+|`id` | The ID of the document to retrieve. |
+
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
From the [Java functions runtime library](/java/api/overview/azure/functions/run
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file, where properties differ by extension version:
azure-functions Functions Bindings Cosmosdb V2 Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-output.md
Title: Azure Cosmos DB output binding for Functions 2.x and higher description: Learn to use the Azure Cosmos DB output binding in Azure Functions. Previously updated : 03/04/2022 Last updated : 03/02/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
The Azure Cosmos DB output binding lets you write a new document to an Azure Cos
For information on setup and configuration details, see the [overview](./functions-bindings-cosmosdb-v2.md).
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example Unless otherwise noted, examples in this article target version 3.x of the [Azure Cosmos DB extension](functions-bindings-cosmosdb-v2.md). For use with extension version 4.x, you need to replace the string `collection` in property and attribute names with `container`.
For bulk insert form the objects first and then run the stringify function. Here
::: zone-end ::: zone pivot="programming-language-powershell"
-The following example show how to write data to Azure Cosmos DB using an output binding. The binding is declared in the function's configuration file (_functions.json_), and take data from a queue message and writes out to an Azure Cosmos DB document.
+The following example shows how to write data to Azure Cosmos DB using an output binding. The binding is declared in the function's configuration file (_functions.json_), and takes data from a queue message and writes out to an Azure Cosmos DB document.
```json {
Push-OutputBinding -Name EmployeeDocument -Value @{
::: zone-end ::: zone pivot="programming-language-python"
-The following example demonstrates how to write a document to an Azure Cosmos DB database as the output of a function.
+The following example demonstrates how to write a document to an Azure Cosmos DB database as the output of a function. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.route()
+@app.cosmos_db_output(arg_name="documents",
+ database_name="DB_NAME",
+ collection_name="COLLECTION_NAME",
+ create_if_not_exists=True,
+ connection_string_setting="CONNECTION_SETTING")
+def main(req: func.HttpRequest, documents: func.Out[func.Document]) -> func.HttpResponse:
+ request_body = req.get_body()
+ documents.set(func.Document.from_json(request_body))
+ return 'OK'
+```
+
+# [v1](#tab/python-v1)
The binding definition is defined in *function.json* where *type* is set to `cosmosDB`.
def main(req: func.HttpRequest, doc: func.Out[func.Document]) -> func.HttpRespon
return 'OK' ``` ++ ::: zone-end ::: zone pivot="programming-language-csharp" ## Attributes
Both [in-process](functions-dotnet-class-library.md) and [isolated worker proces
::: zone-end +
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using a decorator, the following properties on the `cosmos_db_output` decorator define the Azure Cosmos DB output binding:
+
+| Property | Description |
+|-|--|
+|`arg_name` | The variable name used in function code that represents the documents to be written. |
+|`database_name` | The name of the Azure Cosmos DB database that contains the collection where documents are written. |
+|`collection_name` | The name of the Azure Cosmos DB collection where documents are written. |
+|`create_if_not_exists` | A Boolean value that indicates whether the database and collection should be created if they don't exist. |
+|`connection_string_setting` | The name of the app setting that contains the Azure Cosmos DB connection string. |
+
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
From the [Java functions runtime library](/java/api/overview/azure/functions/run
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file, where properties differ by extension version:
azure-functions Functions Bindings Cosmosdb V2 Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md
The Azure Cosmos DB Trigger uses the [Azure Cosmos DB change feed](../cosmos-db/
For information on setup and configuration details, see the [overview](./functions-bindings-cosmosdb-v2.md).
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example ::: zone pivot="programming-language-csharp"
Write-Host "First document Id modified : $($Documents[0].id)"
::: zone-end ::: zone pivot="programming-language-python"
-The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are modified.
+The following example shows an Azure Cosmos DB trigger binding. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
-Here's the binding data in the *function.json* file:
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="CosmosDBTrigger")
+@app.cosmos_db_trigger(arg_name="documents",
+ database_name="DB_NAME",
+ collection_name="COLLECTION_NAME",
+ connection_string_setting="CONNECTION_SETTING",
+ lease_collection_name="leases",
+ create_lease_collection_if_not_exists=True)
+def test_function(documents: func.DocumentList):
+ if documents:
+ logging.info('Document id: %s', documents[0]['id'])
+```
+
+# [v1](#tab/python-v1)
+
+The function writes log messages when Azure Cosmos DB records are modified. Here's the binding data in the *function.json* file:
[!INCLUDE [functions-cosmosdb-trigger-attributes](../../includes/functions-cosmosdb-trigger-attributes.md)]
Here's the Python code:
logging.info('First document Id modified: %s', documents[0]['id']) ``` ++ ::: zone-end ::: zone pivot="programming-language-csharp" ## Attributes
Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotn
::: zone-end
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using a decorator, the following properties on the `cosmos_db_trigger` decorator define the Azure Cosmos DB trigger:
+
+| Property | Description |
+|-|--|
+|`arg_name` | The variable name used in function code that represents the list of documents with changes. |
+|`database_name` | The name of the Azure Cosmos DB database with the collection being monitored. |
+|`collection_name` | The name of the Azure Cosmos DB collection being monitored. |
+|`connection_string_setting` | The name of the app setting that contains the connection string of the Azure Cosmos DB account being monitored. |
+
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
From the [Java functions runtime library](/java/api/overview/azure/functions/run
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file, where properties differ by extension version:
azure-functions Functions Bindings Event Hubs Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-output.md
description: Learn to write messages to Azure Event Hubs streams using Azure Fun
ms.assetid: daf81798-7acc-419a-bc32-b5a41c6db56b Previously updated : 03/04/2022 Last updated : 03/03/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers
Use the Event Hubs output binding to write events to an event stream. You must h
Make sure the required package references are in place before you try to implement an output binding.
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
++ ## Example ::: zone pivot="programming-language-csharp"
module.exports = function(context) {
Complete PowerShell examples are pending. ::: zone-end ::: zone pivot="programming-language-python"
-The following example shows an event hub trigger binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function writes a message to an event hub.
+The following example shows an Event Hubs output binding and a Python function that uses the binding. The function writes a message to an event hub. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="eventhub_output")
+@app.route(route="eventhub_output")
+@app.event_hub_output(arg_name="event",
+ event_hub_name="<EVENT_HUB_NAME>",
+ connection="<CONNECTION_SETTING>")
+def eventhub_output(req: func.HttpRequest, event: func.Out[str]):
+ body = req.get_body()
+ if body is not None:
+ event.set(body.decode('utf-8'))
+ else:
+ logging.info('req body is none')
+ return 'ok'
+```
+
+# [v1](#tab/python-v1)
The following examples show Event Hubs binding data in the *function.json* file.
def main(timer: func.TimerRequest) -> str:
return 'Message created at: {}'.format(timestamp) ``` ++ ::: zone-end ::: zone pivot="programming-language-java" The following example shows a Java function that writes a message containing the current time to an Event Hub.
The following table explains the binding configuration properties that you set i
::: zone-end
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using a decorator, the following properties on the `event_hub_output` decorator define the Event Hubs output binding:
+
+| Property | Description |
+|-|--|
+|`arg_name` | The variable name used in function code that represents the event. |
+|`event_hub_name` | The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. |
+|`connection` | The name of an app setting or setting collection that specifies how to connect to Event Hubs. To learn more, see [Connections](#connections). |
+
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
In the [Java functions runtime library](/java/api/overview/azure/functions/runti
::: zone pivot="programming-language-javascript,programming-language-python,programming-language-powershell" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file, which differs by runtime version.
azure-functions Functions Bindings Event Hubs Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-trigger.md
Title: Azure Event Hubs trigger for Azure Functions
description: Learn to use Azure Event Hubs trigger in Azure Functions. ms.assetid: daf81798-7acc-419a-bc32-b5a41c6db56b Previously updated : 03/04/2022 Last updated : 03/03/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-trigger.md
Title: Azure Functions HTTP trigger description: Learn how to call an Azure Function via HTTP. Previously updated : 03/04/2022 Last updated : 03/06/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
For more information about HTTP bindings, see the [overview](./functions-binding
[!INCLUDE [HTTP client best practices](../../includes/functions-http-client-best-practices.md)]
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
++ ## Example ::: zone pivot="programming-language-csharp"
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
::: zone-end ::: zone pivot="programming-language-python"
-The following example shows a trigger binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function looks for a `name` parameter either in the query string or the body of the HTTP request.
+The following example shows a trigger binding and a Python function that uses the binding. The function looks for a `name` parameter either in the query string or the body of the HTTP request. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+```python
+import azure.functions as func
+import logging
+
+app = func.FunctionApp()
+
+@app.function_name(name="HttpTrigger1")
+@app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
+def test_function(req: func.HttpRequest) -> func.HttpResponse:
+ logging.info('Python HTTP trigger function processed a request.')
+ return func.HttpResponse(
+ "This HTTP triggered function executed successfully.",
+ status_code=200
+ )
+```
+
+# [v1](#tab/python-v1)
Here's the *function.json* file:
def main(req: func.HttpRequest) -> func.HttpResponse:
) ``` ++ ::: zone-end ::: zone pivot="programming-language-csharp" ## Attributes
The following table explains the trigger configuration properties that you set i
::: zone-end +
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using a decorator, the following properties for the trigger are set on the `route` decorator, which adds both the HttpTrigger and HttpOutput bindings:
+
+| Property | Description |
+|-|--|
+| `route` | The route for the HTTP endpoint. If not specified, the route defaults to the function name. |
+| `trigger_arg_name` | The argument name for the `HttpRequest` object. The default is `req`. |
+| `binding_arg_name` | The argument name for the `HttpResponse` object. The default is `$return`. |
+| `methods` | A tuple of the HTTP methods to which the function responds. |
+| `auth_level` | Determines what keys, if any, need to be present on the request to invoke the function. |
+
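+As a minimal sketch (assuming the `methods` parameter accepts HTTP method names as strings, and using a placeholder route), the following function responds only to GET and POST requests and requires a function key:
+
+```python
+import azure.functions as func
+
+app = func.FunctionApp()
+
+# Minimal sketch: respond only to GET and POST, and require a function key.
+@app.route(route="items", methods=("GET", "POST"),
+           auth_level=func.AuthLevel.FUNCTION)
+def items(req: func.HttpRequest) -> func.HttpResponse:
+    return func.HttpResponse(f"Received a {req.method} request.")
+```
+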
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
In the [Java functions runtime library](/java/api/overview/azure/functions/runti
::: zone pivot="programming-language-javascript,programming-language-python,programming-language-powershell" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the trigger configuration properties that you set in the *function.json* file, which differs by runtime version.
public class HttpTriggerJava {
``` ::: zone-end As an example, the following *function.json* file defines a `route` property for an HTTP trigger with two parameters, `category` and `id`:
As an example, the following *function.json* file defines a `route` property for
} ``` +
+As an example, the following code defines a `route` property for an HTTP trigger with two parameters, `category` and `id`:
+
+# [v2](#tab/python-v2)
+
+```python
+@app.function_name(name="httpTrigger")
+@app.route(route="products/{category:alpha}/{id:int?}")
+```
+
+# [v1](#tab/python-v1)
+
+In the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "httpTrigger",
+ "name": "req",
+ "direction": "in",
+ "methods": [ "get" ],
+ "route": "products/{category:alpha}/{id:int?}"
+ },
+ {
+ "type": "http",
+ "name": "res",
+ "direction": "out"
+ }
+ ]
+}
+```
+++ ::: zone-end ::: zone pivot="programming-language-javascript"
Route parameters that defined a function's `route` pattern are available to each
The following configuration shows how the `{id}` parameter is passed to the binding's `rowKey`.
+# [v2](#tab/python-v2)
+
+```python
+@app.table_input(arg_name="product", table_name="products",
+ row_key="{id}", partition_key="products",
+ connection="AzureWebJobsStorage")
+```
+
+# [v1](#tab/python-v1)
+ ```json { "type": "table",
The following configuration shows how the `{id}` parameter is passed to the bind
} ``` ++ When you use route parameters, an `invoke_URL_template` is automatically created for your function. Your clients can use the URL template to understand the parameters they need to pass in the URL when calling your function using its URL. Navigate to one of your HTTP-triggered functions in the [Azure portal](https://portal.azure.com) and select **Get function URL**. You can programmatically access the `invoke_URL_template` by using the Azure Resource Manager APIs for [List Functions](/rest/api/appservice/webapps/listfunctions) or [Get Function](/rest/api/appservice/webapps/getfunction).
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md
Title: Azure Service Bus output bindings for Azure Functions
description: Learn to send Azure Service Bus messages from Azure Functions. ms.assetid: daedacf0-6546-4355-a65c-50873e74f66b Previously updated : 03/04/2022 Last updated : 03/06/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
Use Azure Service Bus output binding to send queue or topic messages.
For information on setup and configuration details, see the [overview](functions-bindings-service-bus.md).
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example ::: zone pivot="programming-language-csharp"
Push-OutputBinding -Name outputSbMsg -Value @{
::: zone-end ::: zone pivot="programming-language-python"
-The following example demonstrates how to write out to a Service Bus queue in Python.
+The following example demonstrates how to write out to a Service Bus queue in Python. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.route(route="put_message")
+@app.service_bus_topic_output(arg_name="message",
+ connection="<CONNECTION_SETTING>",
+ topic_name="<TOPIC_NAME>")
+def main(req: func.HttpRequest, message: func.Out[str]) -> func.HttpResponse:
+ input_msg = req.params.get('message')
+ message.set(input_msg)
+ return 'OK'
+```
+
+# [v1](#tab/python-v1)
A Service Bus binding definition is defined in *function.json* where *type* is set to `serviceBus`.
C# script uses a *function.json* file for configuration instead of attributes. T
::: zone-end +
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using a decorator, the following properties on the output decorator (`service_bus_topic_output` in the preceding example) define the Service Bus output binding:
+
+| Property | Description |
+|-|--|
+| `arg_name` | The name of the variable that represents the queue or topic message in function code. |
+| `queue_name` | Name of the queue. Set only if sending queue messages, not for a topic. |
+| `topic_name` | Name of the topic. Set only if sending topic messages, not for a queue. |
+| `connection` | The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections). |
+
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
The `ServiceBusQueueOutput` and `ServiceBusTopicOutput` annotations are availabl
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file and the `ServiceBus` attribute.
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md
Title: Azure Service Bus trigger for Azure Functions
description: Learn to run an Azure Function when as Azure Service Bus messages are created. ms.assetid: daedacf0-6546-4355-a65c-50873e74f66b Previously updated : 03/04/2022 Last updated : 03/06/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
Starting with extension version 3.1.0, you can trigger on a session-enabled queu
For information on setup and configuration details, see the [overview](functions-bindings-service-bus.md).
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example ::: zone pivot="programming-language-csharp"
Write-Host "PowerShell ServiceBus queue trigger function processed message: $myS
::: zone-end ::: zone pivot="programming-language-python"
-The following example demonstrates how to read a Service Bus queue message via a trigger.
+The following example demonstrates how to read a Service Bus queue message via a trigger. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
-A Service Bus binding is defined in *function.json* where *type* is set to `serviceBusTrigger`.
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="ServiceBusQueueTrigger1")
+@app.service_bus_queue_trigger(arg_name="msg",
+ queue_name="<QUEUE_NAME>",
+ connection="<CONNECTION_SETTING">)
+def test_function(msg: func.ServiceBusMessage):
+ logging.info('Python ServiceBus queue trigger processed message: %s',
+ msg.get_body().decode('utf-8'))
+```
+
+# [v1](#tab/python-v1)
+
+A Service Bus binding is defined in *function.json* where *type* is set to `serviceBusTrigger` and the queue is set by `queueName`.
```json {
def main(msg: func.ServiceBusMessage):
logging.info(result) ```+++
+The following example demonstrates how to read a Service Bus topic message via a trigger.
+
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="ServiceBusTopicTrigger1")
+@app.service_bus_topic_trigger(arg_name="message",
+ topic_name="TOPIC_NAME",
+ connection="CONNECTION_SETTING",
+ subscription_name="SUBSCRIPTION_NAME")
+def test_function(message: func.ServiceBusMessage):
+ message_body = message.get_body().decode("utf-8")
+ logging.info("Python ServiceBus topic trigger processed message.")
+ logging.info("Message Body: " + message_body)
+```
+
+# [v1](#tab/python-v1)
+
+A Service Bus binding is defined in *function.json* where *type* is set to `serviceBusTrigger` and the topic is set by `topicName`.
+
+```json
+{
+ "scriptFile": "__init__.py",
+ "bindings": [
+   {
+     "type": "serviceBusTrigger",
+     "direction": "in",
+     "name": "msg",
+     "topicName": "inputtopic",
+     "connection": "AzureServiceBusConnectionString"
+   }
+ ]
+}
+```
+
+The code in *\_\_init\_\_.py* declares a parameter as `func.ServiceBusMessage`, which allows you to read the topic message in your function.
+
+```python
+import json
+import logging
+
+import azure.functions as azf
++
+def main(msg: azf.ServiceBusMessage) -> str:
+ result = json.dumps({
+ 'message_id': msg.message_id,
+ 'body': msg.get_body().decode('utf-8'),
+ 'content_type': msg.content_type,
+ 'delivery_count': msg.delivery_count,
+ 'expiration_time': (msg.expiration_time.isoformat() if
+ msg.expiration_time else None),
+ 'label': msg.label,
+ 'partition_key': msg.partition_key,
+ 'reply_to': msg.reply_to,
+ 'reply_to_session_id': msg.reply_to_session_id,
+ 'scheduled_enqueue_time': (msg.scheduled_enqueue_time.isoformat() if
+ msg.scheduled_enqueue_time else None),
+ 'session_id': msg.session_id,
+ 'time_to_live': msg.time_to_live,
+ 'to': msg.to,
+ 'user_properties': msg.user_properties,
+ })
+
+ logging.info(result)
+```
+++ ::: zone-end ::: zone pivot="programming-language-csharp" ## Attributes
C# script uses a *function.json* file for configuration instead of attributes. T
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ::: zone-end +
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using a decorator, the following properties on the `service_bus_queue_trigger` decorator define the Service Bus queue trigger:
+
+| Property | Description |
+|-|--|
+| `arg_name` | The name of the variable that represents the queue or topic message in function code. |
+| `queue_name` | Name of the queue to monitor. Set only if monitoring a queue, not for a topic. |
+| `connection` | The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections). |
+
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
See the trigger [example](#example) for more detail.
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file.
azure-functions Functions Bindings Storage Blob Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-input.md
Title: Azure Blob storage input binding for Azure Functions description: Learn how to provide Azure Blob storage input binding data to an Azure Function. Previously updated : 03/04/2022 Last updated : 03/02/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
The input binding allows you to read blob storage data as input to an Azure Func
For information on setup and configuration details, see the [overview](./functions-bindings-storage-blob.md).
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example ::: zone pivot="programming-language-csharp"
Write-Host "PowerShell Blob trigger: Name: $($TriggerMetadata.Name) Size: $($Inp
::: zone-end ::: zone pivot="programming-language-python"
-The following example shows blob input and output bindings in a *function.json* file and [Python code](functions-reference-python.md) that uses the bindings. The function makes a copy of a blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
+The following example shows blob input and output bindings. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+The code creates a copy of a blob.
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="BlobOutput1")
+@app.route(route="file")
+@app.blob_input(arg_name="inputblob",
+ path="sample-workitems/test.txt",
+ connection="<BLOB_CONNECTION_SETTING>")
+@app.blob_output(arg_name="outputblob",
+ path="newblob/test.txt",
+ connection="<BLOB_CONNECTION_SETTING>")
+def main(req: func.HttpRequest, inputblob: str, outputblob: func.Out[str]):
+ logging.info(f'Python HTTP trigger function processed {len(inputblob)} bytes')
+ outputblob.set(inputblob)
+ return "ok"
+```
+
+# [v1](#tab/python-v1)
+
+The function makes a copy of a blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
In the *function.json* file, the `queueTrigger` metadata property is used to specify the blob name in the `path` properties:
def main(queuemsg: func.QueueMessage, inputblob: bytes) -> bytes:
return inputblob ``` ++ ::: zone-end ::: zone pivot="programming-language-csharp" ## Attributes
The following table explains the binding configuration properties for C# script
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ::: zone-end
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using decorators, the following properties on the `blob_input` and `blob_output` decorators define the Blob Storage input and output bindings:
+
+| Property | Description |
+|-|--|
+|`arg_name` | The name of the variable that represents the blob in function code. |
+|`path` | The path to the blob. For the `blob_input` decorator, it's the blob read. For the `blob_output` decorator, it's the output or copy of the input blob. |
+|`connection` | The storage account connection string. |
+|`data_type` | For dynamically typed languages, specifies the underlying data type. Possible values are `string`, `binary`, or `stream`. For more detail, refer to the [triggers and bindings concepts](functions-triggers-bindings.md?tabs=python#trigger-and-binding-definitions). |
+
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
The `@BlobInput` attribute gives you access to the blob that triggered the funct
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file.
azure-functions Functions Bindings Storage Blob Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-output.md
Title: Azure Blob storage output binding for Azure Functions description: Learn how to provide Azure Blob storage output binding data to an Azure Function. Previously updated : 03/04/2022 Last updated : 03/02/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
The output binding allows you to modify and delete blob storage data in an Azure
For information on setup and configuration details, see the [overview](./functions-bindings-storage-blob.md).
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example ::: zone pivot="programming-language-csharp"
Push-OutputBinding -Name myOutputBlob -Value $myInputBlob
<!--Same example for input and output. -->
-The following example shows blob input and output bindings in a *function.json* file and [Python code](functions-reference-python.md) that uses the bindings. The function makes a copy of a blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
+The following example shows blob input and output bindings. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+The code creates a copy of a blob.
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="BlobOutput1")
+@app.route(route="file")
+@app.blob_input(arg_name="inputblob",
+ path="sample-workitems/test.txt",
+ connection="<BLOB_CONNECTION_SETTING>")
+@app.blob_output(arg_name="outputblob",
+ path="newblob/test.txt",
+ connection="<BLOB_CONNECTION_SETTING>")
+def main(req: func.HttpRequest, inputblob: str, outputblob: func.Out[str]):
+ logging.info(f'Python HTTP trigger function processed {len(inputblob)} bytes')
+ outputblob.set(inputblob)
+ return "ok"
+```
+
+# [v1](#tab/python-v1)
+
+The function makes a copy of a blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
In the *function.json* file, the `queueTrigger` metadata property is used to specify the blob name in the `path` properties:
The following table explains the binding configuration properties for C# script
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ::: zone-end
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using decorators, the following properties on the `blob_input` and `blob_output` decorators define the Blob Storage input and output bindings:
+
+| Property | Description |
+|-|--|
+|`arg_name` | The name of the variable that represents the blob in function code. |
+|`path` | The path to the blob. For the `blob_input` decorator, it's the blob read. For the `blob_output` decorator, it's the output or copy of the input blob. |
+|`connection` | The storage account connection string. |
+|`data_type` | For dynamically typed languages, specifies the underlying data type. Possible values are `string`, `binary`, or `stream`. For more detail, refer to the [triggers and bindings concepts](functions-triggers-bindings.md?tabs=python#trigger-and-binding-definitions). |
++
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
The `@BlobOutput` attribute gives you access to the blob that triggered the func
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file.
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md
Title: Azure Blob storage trigger for Azure Functions description: Learn how to run an Azure Function as Azure Blob storage data changes. Previously updated : 03/04/2022 Last updated : 03/06/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
There are several ways to execute your function code based on changes to blobs i
For information on setup and configuration details, see the [overview](./functions-bindings-storage-blob.md).
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example ::: zone pivot="programming-language-csharp"
Write-Host "PowerShell Blob trigger: Name: $($TriggerMetadata.Name) Size: $($Inp
::: zone-end ::: zone pivot="programming-language-python"
-The following example shows a blob trigger binding in a *function.json* file and [Python code](functions-reference-python.md) that uses the binding. The function writes a log when a blob is added or updated in the `samples-workitems` [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources).
+The following example shows a blob trigger binding. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="BlobTrigger1")
+@app.blob_trigger(arg_name="myblob",
+ path="PATH/TO/BLOB",
+ connection="CONNECTION_SETTING")
+def test_function(myblob: func.InputStream):
+ logging.info(f"Python blob trigger function processed blob \n"
+ f"Name: {myblob.name}\n"
+ f"Blob Size: {myblob.length} bytes")
+```
+
+# [v1](#tab/python-v1)
+
+The function writes a log when a blob is added or updated in the `samples-workitems` [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources).
Here's the *function.json* file: ```json
def main(myblob: func.InputStream):
``` ::: zone-end +++ ::: zone pivot="programming-language-csharp" ## Attributes
C# script uses a *function.json* file for configuration instead of attributes.
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ::: zone-end
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using decorators, the following properties on the `blob_trigger` decorator define the Blob Storage trigger:
+
+| Property | Description |
+|-|--|
+|`arg_name` | Declares the parameter name in the function signature. When the function is triggered, this parameter's value has the contents of the blob. |
+|`path` | The [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources) to monitor. May be a [blob name pattern](#blob-name-patterns). |
+|`connection` | The storage account connection string. |
+
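+For example, here's a minimal sketch of a trigger whose `path` uses a blob name pattern; it assumes a `samples-workitems` container and the default `AzureWebJobsStorage` connection setting:
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+# Minimal sketch: trigger on blobs in the samples-workitems container; {name}
+# is a blob name pattern that captures the name of the blob.
+@app.blob_trigger(arg_name="myblob",
+                  path="samples-workitems/{name}",
+                  connection="AzureWebJobsStorage")
+def process_blob(myblob: func.InputStream):
+    logging.info("Processing blob: %s (%s bytes)", myblob.name, myblob.length)
+```
+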
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
::: zone pivot="programming-language-java" ## Annotations
The `@BlobTrigger` attribute is used to give you access to the blob that trigger
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file. |function.json property |Description|
azure-functions Functions Bindings Storage Queue Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-output.md
Title: Azure Queue storage output binding for Azure Functions description: Learn to create Azure Queue storage messages in Azure Functions. Previously updated : 03/04/2022 Last updated : 03/06/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
Azure Functions can create new Azure Queue storage messages by setting up an out
For information on setup and configuration details, see the [overview](./functions-bindings-storage-queue.md).
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example ::: zone pivot="programming-language-csharp"
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
::: zone-end ::: zone pivot="programming-language-python"
-The following example demonstrates how to output single and multiple values to storage queues. The configuration needed for *function.json* is the same either way.
+The following example demonstrates how to output single and multiple values to storage queues. The configuration needed for *function.json* is the same either way. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="QueueOutput1")
+@app.route(route="message")
+@app.queue_output(arg_name="msg",
+ queue_name="<QUEUE_NAME>",
+ connection="<CONNECTION_SETTING>")
+def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
+ input_msg = req.params.get('name')
+ logging.info(input_msg)
+
+ msg.set(input_msg)
+
+ logging.info(f'name: {input_msg}')
+ return 'OK'
+```
+# [v1](#tab/python-v1)
A Storage queue binding is defined in *function.json* where *type* is set to `queue`.
def main(req: func.HttpRequest, msg: func.Out[typing.List[str]]) -> func.HttpRes
return 'OK' ``` ++ ::: zone-end ::: zone pivot="programming-language-csharp" ## Attributes
The following table explains the binding configuration properties that you set i
|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Queues. See [Connections](#connections).| ::: zone-end
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using a decorator, the following properties on the `queue_output` decorator define the Queue Storage output binding:
+
+| Property | Description |
+|-|--|
+| `arg_name` | The name of the variable that represents the queue in function code. |
+| `queue_name` | The name of the queue. |
+| `connection` | The name of an app setting or setting collection that specifies how to connect to Azure Queues. See [Connections](#connections). |
+
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
The parameter associated with the [QueueOutput](/java/api/com.microsoft.azure.fu
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file.
azure-functions Functions Bindings Storage Queue Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-trigger.md
Title: Azure Queue storage trigger for Azure Functions description: Learn to run an Azure Function as Azure Queue storage data changes. Previously updated : 03/04/2022 Last updated : 02/27/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
zone_pivot_groups: programming-languages-set-functions-lang-workers
The queue storage trigger runs a function as messages are added to Azure Queue storage.
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example ::: zone pivot="programming-language-csharp"
Here's the *function.json* file:
} ```
-The [configuration](#configuration) section explains these properties.
+The [section below](#attributes) explains these properties.
Here's the C# script code:
Write-Host "Dequeue count: $($TriggerMetadata.DequeueCount)"
::: zone-end ::: zone pivot="programming-language-python"
-The following example demonstrates how to read a queue message passed to a function via a trigger.
+The following example demonstrates how to read a queue message passed to a function via a trigger. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="QueueFunc")
+@app.queue_trigger(arg_name="msg", queue_name="inputqueue",
+ connection="storageAccountConnectionString") # Queue trigger
+@app.write_queue(arg_name="outputQueueItem", queue_name="outqueue",
+ connection="storageAccountConnectionString") # Queue output binding
+def test_function(msg: func.QueueMessage,
+ outputQueueItem: func.Out[str]) -> None:
+ logging.info('Python queue trigger function processed a queue item: %s',
+ msg.get_body().decode('utf-8'))
+ outputQueueItem.set('hello')
+```
+
+# [v1](#tab/python-v1)
A Storage queue trigger is defined in *function.json* where *type* is set to `queueTrigger`.
def main(msg: func.QueueMessage):
logging.info(result) ```+ ::: zone-end ::: zone pivot="programming-language-csharp"
public class QueueTriggerDemo {
|`connection` | Points to the storage account connection string. | ::: zone-end
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using decorators, the following properties on the `queue_trigger` decorator define the Queue Storage trigger:
+
+| Property | Description |
+|-|--|
+|`arg_name` | Declares the parameter name in the function signature. When the function is triggered, this parameter's value has the contents of the queue message. |
+|`queue_name` | Declares the queue name in the storage account. |
+|`connection` | Points to the storage account connection string. |
+
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
The following table explains the binding configuration properties that you set in the *function.json* file and the `QueueTrigger` attribute. |function.json property | Description|
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md
Title: Timer trigger for Azure Functions
description: Understand how to use timer triggers in Azure Functions. ms.assetid: d2f013d1-f458-42ae-baf8-1810138118ac Previously updated : 03/04/2022 Last updated : 03/06/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
For information on how to manually run a timer-triggered function, see [Manually
Source code for the timer extension package is in the [azure-webjobs-sdk-extensions](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/) GitHub repository.
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example ::: zone pivot="programming-language-csharp"
public void keepAlive(
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-python,programming-language-powershell"
-The following example shows a timer trigger binding in a *function.json* file and function code that uses the binding, where an instance representing the timer is passed to the function. The function writes a log indicating whether this function invocation is due to a missed schedule occurrence.
+The following example shows a timer trigger binding and function code that uses the binding, where an instance representing the timer is passed to the function. The function writes a log indicating whether this function invocation is due to a missed schedule occurrence. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+```python
+import datetime
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="mytimer")
+@app.schedule(schedule="0 */5 * * * *",
+ arg_name="mytimer",
+ run_on_startup=True)
+def test_function(mytimer: func.TimerRequest) -> None:
+ utc_timestamp = datetime.datetime.utcnow().replace(
+ tzinfo=datetime.timezone.utc).isoformat()
+ if mytimer.past_due:
+ logging.info('The timer is past due!')
+ logging.info('Python timer trigger function ran at %s', utc_timestamp)
+```
+
+# [v1](#tab/python-v1)
Here's the binding data in the *function.json* file:
module.exports = async function (context, myTimer) {
}; ``` ++ ::: zone-end ::: zone pivot="programming-language-powershell"
The following table explains the binding configuration properties for C# script
::: zone-end +
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using decorators, the following properties on the `schedule` decorator define the timer trigger:
+
+| Property | Description |
+|-|--|
+| `arg_name` | The name of the variable that represents the timer object in function code. |
+| `schedule` | A [CRON expression](#ncrontab-expressions) or a [TimeSpan](#timespan) value. A `TimeSpan` can be used only for a function app that runs on an App Service Plan. You can put the schedule expression in an app setting and set this property to the app setting name wrapped in **%** signs, as in this example: "%ScheduleAppSetting%". |
+| `run_on_startup` | If `true`, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity, when the function app restarts due to function changes, and when the function app scales out. *Use with caution.* `run_on_startup` should rarely if ever be set to `true`, especially in production. |
+| `use_monitor` | Set to `true` or `false` to indicate whether the schedule should be monitored. Schedule monitoring persists schedule occurrences to aid in ensuring the schedule is maintained correctly even when function app instances restart. If not set explicitly, the default is `true` for schedules that have a recurrence interval greater than or equal to 1 minute. For schedules that trigger more than once per minute, the default is `false`. |
+
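+As a minimal sketch of the `%ScheduleAppSetting%` pattern described above (assuming a hypothetical app setting named `ScheduleAppSetting` that holds the NCRONTAB expression), a v2 timer function could resolve its schedule from that setting:
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="mytimer")
+@app.schedule(schedule="%ScheduleAppSetting%",  # resolved from the ScheduleAppSetting app setting
+              arg_name="mytimer",
+              use_monitor=True)                 # persist schedule occurrences across restarts
+def timer_from_setting(mytimer: func.TimerRequest) -> None:
+    if mytimer.past_due:
+        logging.info('The timer is past due!')
+    logging.info('Python timer trigger function ran.')
+```
+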
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
The `@TimerTrigger` annotation on the function defines the `schedule` using the
::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
++ The following table explains the binding configuration properties that you set in the *function.json* file.
azure-functions Functions Bindings Triggers Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-triggers-python.md
- Title: Python V2 model Azure Functions triggers and bindings
-description: Provides examples of how to define Python triggers and bindings in Azure Functions using the preview v2 model
- Previously updated : 10/25/2022---
-# Python V2 model Azure Functions triggers and bindings (preview)
-
-The new Python v2 programming model in Azure Functions is intended to provide better alignment with Python development principles and with commonly used Python frameworks.
-
-The improved v2 programming model requires fewer files than the default model (v1), and specifically eliminates the need for a configuration file (`function.json`). Instead, triggers and bindings are represented in the `function_app.py` file as decorators. Moreover, functions can be logically organized with support for multiple functions to be stored in the same file. Functions within the same function application can also be stored in different files, and be referenced as blueprints.
-
-To learn more about using the new Python programming model for Azure Functions, see the [Azure Functions Python developer guide](./functions-reference-python.md). In addition to the documentation, [hints](https://aka.ms/functions-python-hints) are available in code editors that support type checking with .pyi files.
-
-This article contains example code snippets that define various triggers and bindings using the Python v2 programming model. To be able to run the code snippets below, ensure the following:
--- The function application is defined and named `app`.-- Confirm that the parameters within the trigger reflect values that correspond with your storage account.-- The name of the file the function is in must be `function_app.py`.-
-To create your first function in the new v2 model, see one of these quickstart articles:
-
-+ [Get started with Visual Studio](./create-first-function-vs-code-python.md)
-+ [Get started command prompt](./create-first-function-cli-python.md)
-
-## Azure Blob storage trigger
-
-The following code snippet defines a function triggered from Azure Blob Storage:
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="BlobTrigger1")
-@app.blob_trigger(arg_name="myblob", path="samples-workitems/{name}",
- connection="AzureWebJobsStorage")
-def test_function(myblob: func.InputStream):
- logging.info(f"Python blob trigger function processed blob \n"
- f"Name: {myblob.name}\n"
- f"Blob Size: {myblob.length} bytes")
-```
-
-## Azure Blob storage input binding
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="BlobInput1")
-@app.route(route="file")
-@app.blob_input(arg_name="inputblob",
- path="sample-workitems/{name}",
- connection="AzureWebJobsStorage")
-def test(req: func.HttpRequest, inputblob: bytes) -> func.HttpResponse:
- logging.info(f'Python Queue trigger function processed {len(inputblob)} bytes')
- return inputblob
-```
-
-## Azure Blob storage output binding
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="BlobOutput1")
-@app.route(route="file")
-@app.blob_input(arg_name="inputblob",
- path="sample-workitems/test.txt",
- connection="AzureWebJobsStorage")
-@app.blob_output(arg_name="outputblob",
- path="newblob/test.txt",
- connection="AzureWebJobsStorage")
-def main(req: func.HttpRequest, inputblob: str, outputblob: func.Out[str]):
- logging.info(f'Python Queue trigger function processed {len(inputblob)} bytes')
- outputblob.set(inputblob)
- return "ok"
-```
-
-## Azure Cosmos DB trigger
-
-The following code snippet defines a function triggered from an Azure Cosmos DB (SQL API):
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="CosmosDBTrigger1")
-@app.cosmos_db_trigger(arg_name="documents", database_name="<DB_NAME>", collection_name="<COLLECTION_NAME>", connection_string_setting=""AzureWebJobsStorage"",
- lease_collection_name="leases", create_lease_collection_if_not_exists="true")
-def test_function(documents: func.DocumentList) -> str:
- if documents:
- logging.info('Document id: %s', documents[0]['id'])
-```
-
-## Azure Cosmos DB input binding
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.route()
-@app.cosmos_db_input(
- arg_name="documents", database_name="<DB_NAME>",
- collection_name="<COLLECTION_NAME>",
- connection_string_setting="CONNECTION_SETTING")
-def cosmosdb_input(req: func.HttpRequest, documents: func.DocumentList) -> str:
- return func.HttpResponse(documents[0].to_json())
-```
-
-## Azure Cosmos DB output binding
-
-```python
-import logging
-import azure.functions as func
-@app.route()
-@app.cosmos_db_output(
- arg_name="documents", database_name="<DB_NAME>",
- collection_name="<COLLECTION_NAME>",
- create_if_not_exists=True,
- connection_string_setting="CONNECTION_SETTING")
-def main(req: func.HttpRequest, documents: func.Out[func.Document]) -> func.HttpResponse:
- request_body = req.get_body()
- documents.set(func.Document.from_json(request_body))
- return 'OK'
-```
-
-## Azure EventHub trigger
-
-The following code snippet defines a function triggered from an event hub instance:
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="EventHubTrigger1")
-@app.event_hub_message_trigger(arg_name="myhub", event_hub_name="samples-workitems",
- connection=""CONNECTION_SETTING"")
-def test_function(myhub: func.EventHubEvent):
- logging.info('Python EventHub trigger processed an event: %s',
- myhub.get_body().decode('utf-8'))
-```
-
-## Azure EventHub output binding
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="eventhub_output")
-@app.route(route="eventhub_output")
-@app.event_hub_output(arg_name="event",
- event_hub_name="samples-workitems",
- connection="CONNECTION_SETTING")
-def eventhub_output(req: func.HttpRequest, event: func.Out[str]):
- body = req.get_body()
- if body is not None:
- event.set(body.decode('utf-8'))
- else:
- logging.info('req body is none')
- return 'ok'
-```
-
-## HTTP trigger
-
-The following code snippet defines an HTTP triggered function:
-
-```python
-import azure.functions as func
-import logging
-app = func.FunctionApp(auth_level=func.AuthLevel.ANONYMOUS)
-@app.function_name(name="HttpTrigger1")
-@app.route(route="hello")
-def test_function(req: func.HttpRequest) -> func.HttpResponse:
- logging.info('Python HTTP trigger function processed a request.')
- name = req.params.get('name')
- if not name:
- try:
- req_body = req.get_json()
- except ValueError:
- pass
- else:
- name = req_body.get('name')
- if name:
- return func.HttpResponse(f"Hello, {name}. This HTTP triggered function executed successfully.")
- else:
- return func.HttpResponse(
- "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
- status_code=200
- )
-```
-
-## Azure Queue storage trigger
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="QueueTrigger1")
-@app.queue_trigger(arg_name="msg", queue_name="python-queue-items",
- connection=""AzureWebJobsStorage"")
-def test_function(msg: func.QueueMessage):
- logging.info('Python EventHub trigger processed an event: %s',
- msg.get_body().decode('utf-8'))
-```
-
-## Azure Queue storage output binding
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="QueueOutput1")
-@app.route(route="message")
-@app.queue_output(arg_name="msg", queue_name="python-queue-items", connection="AzureWebJobsStorage")
-def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
- input_msg = req.params.get('name')
- msg.set(input_msg)
- logging.info(input_msg)
- logging.info('name: {name}')
- return 'OK'
-```
-
-## Azure Service Bus queue trigger
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="ServiceBusQueueTrigger1")
-@app.service_bus_queue_trigger(arg_name="msg", queue_name="myinputqueue", connection="CONNECTION_SETTING")
-def test_function(msg: func.ServiceBusMessage):
- logging.info('Python ServiceBus queue trigger processed message: %s',
- msg.get_body().decode('utf-8'))
-```
-
-## Azure Service Bus topic trigger
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="ServiceBusTopicTrigger1")
-@app.service_bus_topic_trigger(arg_name="message", topic_name="mytopic", connection="CONNECTION_SETTING", subscription_name="testsub")
-def test_function(message: func.ServiceBusMessage):
- message_body = message.get_body().decode("utf-8")
- logging.info("Python ServiceBus topic trigger processed message.")
- logging.info("Message Body: " + message_body)
-```
-
-## Azure Service Bus Topic output binding
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.route(route="put_message")
-@app.service_bus_topic_output(
- arg_name="message",
- connection="CONNECTION_SETTING",
- topic_name="mytopic")
-def main(req: func.HttpRequest, message: func.Out[str]) -> func.HttpResponse:
- input_msg = req.params.get('message')
- message.set(input_msg)
- return 'OK'
-```
-
-## Timer trigger
-
-```python
-import datetime
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="mytimer")
-@app.schedule(schedule="0 */5 * * * *", arg_name="mytimer", run_on_startup=True,
- use_monitor=False)
-def test_function(mytimer: func.TimerRequest) -> None:
- utc_timestamp = datetime.datetime.utcnow().replace(
- tzinfo=datetime.timezone.utc).isoformat()
- if mytimer.past_due:
- logging.info('The timer is past due!')
- logging.info('Python timer trigger function ran at %s', utc_timestamp)
-```
-
-## Durable Functions
-
-Durable Functions also provides preview support of the V2 programming model. To try it out, install the Durable Functions SDK (PyPI package `azure-functions-durable`) from version `1.2.2` or greater. You can reach us in the [Durable Functions SDK for Python repo](https://github.com/Azure/azure-functions-durable-python) with feedback and suggestions.
--
-> [!NOTE]
-> Using [Extension Bundles](./functions-bindings-register.md#extension-bundles) is not currently supported when trying out the Python V2 programming model with Durable Functions, so you will need to manage your extensions manually.
-> To do this, remove the `extensionBundle` section of your `host.json` as described [here](./functions-run-local.md#install-extensions) and run `func extensions install --package Microsoft.Azure.WebJobs.Extensions.DurableTask --version 2.9.1` on your terminal. This will install the Durable Functions extension for your app and will allow you to try out the new experience.
-
-The Durable Functions Triggers and Bindings may be accessed from an instance `DFApp`, a subclass of `FunctionApp` that additionally exports Durable Functions-specific decorators.
-
-Below is a simple Durable Functions app that declares a simple sequential orchestrator, all in one file!
-
-```python
-import azure.functions as func
-import azure.durable_functions as df
-
-myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
-
-# An HTTP-Triggered Function with a Durable Functions Client binding
-@myApp.route(route="orchestrators/{functionName}")
-@myApp.durable_client_input(client_name="client")
-async def durable_trigger(req: func.HttpRequest, client):
- function_name = req.route_params.get('functionName')
- instance_id = await client.start_new(function_name)
- response = client.create_check_status_response(req, instance_id)
- return response
-
-# Orchestrator
-@myApp.orchestration_trigger(context_name="context")
-def my_orchestrator(context):
- result1 = yield context.call_activity("hello", "Seattle")
- result2 = yield context.call_activity("hello", "Tokyo")
- result3 = yield context.call_activity("hello", "London")
-
- return [result1, result2, result3]
-
-# Activity
-@myApp.activity_trigger(input_name="myInput")
-def hello(myInput: str):
- return "Hello " + myInput
-```
-
-> [!NOTE]
-> Previously, Durable Functions orchestrators needed an extra line of boilerplate, usually at the end of the file, to be indexed:
-> `main = df.Orchestrator.create(<name_of_orchestrator_function>)`.
-> This is no longer needed in V2 of the Python programming model. This applies to Entities as well, which required a similar boilerplate through
-> `main = df.Entity.create(<name_of_entity_function>)`.
-
-For reference, all Durable Functions Triggers and Bindings are listed below:
-
-### Orchestration Trigger
-
-```python
-import azure.functions as func
-import azure.durable_functions as df
-
-myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
-
-@myApp.orchestration_trigger(context_name="context")
-def my_orchestrator(context):
- result = yield context.call_activity("Hello", "Tokyo")
- return result
-```
-
-### Activity Trigger
-
-```python
-import azure.functions as func
-import azure.durable_functions as df
-
-myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
-
-@myApp.activity_trigger(input_name="myInput")
-def my_activity(myInput: str):
- return "Hello " + myInput
-```
-
-### DF Client Binding
-
-```python
-import azure.functions as func
-import azure.durable_functions as df
-
-myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
-
-@myApp.route(route="orchestrators/{functionName}")
-@myApp.durable_client_input(client_name="client")
-async def durable_trigger(req: func.HttpRequest, client):
- function_name = req.route_params.get('functionName')
- instance_id = await client.start_new(function_name)
- response = client.create_check_status_response(req, instance_id)
- return response
-```
-
-### Entity Trigger
-
-```python
-import azure.functions as func
-import azure.durable_functions as df
-
-myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
-
-@myApp.entity_trigger(context_name="context")
-def entity_function(context):
- current_value = context.get_state(lambda: 0)
- operation = context.operation_name
- if operation == "add":
- amount = context.get_input()
- current_value += amount
- elif operation == "reset":
- current_value = 0
- elif operation == "get":
- pass
-
- context.set_state(current_value)
- context.set_result(current_value)
-```
-
-## Next steps
-
-+ [Python developer guide](./functions-reference-python.md)
-+ [Get started with Visual Studio](./create-first-function-vs-code-python.md)
-+ [Get started command prompt](./create-first-function-cli-python.md)
azure-functions Functions Create Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-vnet.md
Title: Use private endpoints to integrate Azure Functions with a virtual network description: This tutorial shows you how to connect a function to an Azure virtual network and lock it down by using private endpoints. Previously updated : 2/10/2023 Last updated : 3/24/2023 #Customer intent: As an enterprise developer, I want to create a function that can connect to a virtual network with private endpoints to secure my function app. # Tutorial: Integrate Azure Functions with an Azure virtual network by using private endpoints
-This tutorial shows you how to use Azure Functions to connect to resources in an Azure virtual network by using private endpoints. You'll create a function by using a storage account that's locked behind a virtual network. The virtual network uses a Service Bus queue trigger.
+This tutorial shows you how to use Azure Functions to connect to resources in an Azure virtual network by using private endpoints. In the Azure portal, you create a new function app with a new storage account that's locked behind a virtual network. The virtual network uses a Service Bus queue trigger.
In this tutorial, you'll: > [!div class="checklist"]
-> * Create a function app in the Premium plan.
-> * Create Azure resources, such as the Service Bus, storage account, and virtual network.
-> * Lock down your storage account behind a private endpoint.
+> * Create a function app in the Elastic Premium plan with virtual network integration and private endpoints.
+> * Create Azure resources, such as the Service Bus.
> * Lock down your Service Bus behind a private endpoint. > * Deploy a function app that uses both the Service Bus and HTTP triggers.
-> * Lock down your function app behind a private endpoint.
> * Test to see that your function app is secure inside the virtual network. > * Clean up resources. ## Create a function app in a Premium plan
-You'll create a .NET function app in the Premium plan because this tutorial uses C#. Other languages are also supported in Windows. The Premium plan provides serverless scale while supporting virtual network integration.
+You create a C# function app in an [Elastic Premium plan](./functions-premium-plan.md), which supports networking capabilities, such as virtual network integration at create time, along with serverless scale. This tutorial uses C# and Windows. Other languages and Linux are also supported.
1. On the Azure portal menu or the **Home** page, select **Create a resource**.
You'll create a .NET function app in the Premium plan because this tutorial uses
| Setting | Suggested value | Description | | | - | -- | | **Subscription** | Your subscription | Subscription under which this new function app is created. |
- | **[Resource Group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Name for the new resource group where you'll create your function app. |
+ | **[Resource Group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Name for the new resource group where you create your function app. |
| **Function App name** | Globally unique name | Name that identifies your new function app. Valid characters are `a-z` (case insensitive), `0-9`, and `-`. | |**Publish**| Code | Choose to publish code files or a Docker container. | | **Runtime stack** | .NET | This tutorial uses .NET. | | **Version** | 6 | This tutorial uses .NET 6.0 running [in the same process as the Functions host](./functions-dotnet-class-library.md). | |**Region**| Preferred region | Choose a [region](https://azure.microsoft.com/regions/) near you or near other services that your functions access. |
+ |**Operating system**| Windows | This tutorial uses Windows but also works for Linux. |
+ | **[Plan](./functions-scale.md)** | Premium | Hosting plan that defines how resources are allocated to your function app. By default, when you select **Premium**, a new App Service plan is created. The default **Sku and size** is **EP1**, where *EP* stands for _elastic premium_. For more information, see the list of [Premium SKUs](./functions-premium-plan.md#available-instance-skus).<br/><br/>When you run JavaScript functions on a Premium plan, choose an instance that has fewer vCPUs. For more information, see [Choose single-core Premium plans](./functions-reference-node.md#considerations-for-javascript-functions). |
1. Select **Next: Hosting**. On the **Hosting** page, enter the following settings. | Setting | Suggested value | Description | | | - | -- |
- | **[Storage account](../storage/common/storage-account-create.md)** | Globally unique name | Create a storage account used by your function app. Storage account names must be between 3 and 24 characters long. They may contain numbers and lowercase letters only. You can also use an existing account, which must meet the [storage account requirements](./storage-considerations.md#storage-account-requirements). |
- |**Operating system**| Windows | This tutorial uses Windows. |
- | **[Plan](./functions-scale.md)** | Premium | Hosting plan that defines how resources are allocated to your function app. By default, when you select **Premium**, a new App Service plan is created. The default **Sku and size** is **EP1**, where *EP* stands for _elastic premium_. For more information, see the list of [Premium SKUs](./functions-premium-plan.md#available-instance-skus).<br/><br/>When you run JavaScript functions on a Premium plan, choose an instance that has fewer vCPUs. For more information, see [Choose single-core Premium plans](./functions-reference-node.md#considerations-for-javascript-functions). |
+ | **[Storage account](../storage/common/storage-account-create.md)** | Globally unique name | Create a storage account used by your function app. Storage account names must be between 3 and 24 characters long. They may contain numbers and lowercase letters only. You can also use an existing account that isn't restricted by firewall rules and meets the [storage account requirements](./storage-considerations.md#storage-account-requirements). When you use Functions with a locked-down storage account, a v2 storage account is required. This is the default storage version that's created when you create a function app with networking capabilities through the create blade. |
+
+1. Select **Next: Networking**. On the **Networking** page, enter the following settings.
+
+ > [!NOTE]
+ > Some of these settings aren't visible until other options are selected.
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **Enable network injection** | On | The ability to configure your application with VNet integration at creation appears in the portal window after this option is switched to **On**. |
+ | **Virtual Network** | Create New | Select the **Create New** field. In the pop-out screen, provide a name for your virtual network and select **Ok**. Options to restrict inbound and outbound access to your function app on create are displayed. You must explicitly enable VNet integration in the **Outbound access** portion of the window to restrict outbound access. |
+
+ Enter the following settings for the **Inbound access** section. This step creates a private endpoint on your function app.
+
+ > [!TIP]
+ > To continue interacting with your function app from the portal, you need to add your local computer to the virtual network. If you don't wish to restrict inbound access, skip this step.
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **Enable private endpoints** | On | The ability to configure your application with VNet integration at creation appears in the portal after this option is enabled. |
+ | **Private endpoint name** | myInboundPrivateEndpointName | Name that identifies your new function app private endpoint. |
+ | **Inbound subnet** | Create New | This option creates a new subnet for your inbound private endpoint. Multiple private endpoints may be added to a singular subnet. Provide a **Subnet Name**. The **Subnet Address Block** may be left at the default value. Select **Ok**. To learn more about subnet sizing, see [Subnets](functions-networking-options.md#subnets). |
+ | **DNS** | Azure Private DNS Zone | This value indicates which DNS server your private endpoint uses. In most cases, if you're working within Azure, Azure Private DNS Zone is the DNS zone to use, because using **Manual** for custom DNS zones adds complexity. |
+
+ Enter the following settings for the **Outbound access** section. This step integrates your function app with a virtual network on creation. It also exposes options to create private endpoints on your storage account and to restrict network access to your storage account on create. When the function app is integrated with a virtual network, all outbound traffic by default goes [through the virtual network](../app-service/overview-vnet-integration.md#how-regional-virtual-network-integration-works).
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **Enable VNet Integration** | On | This setting integrates your function app with a VNet on create and directs all outbound traffic through the VNet. |
+ | **Outbound subnet** | Create new | This option creates a new subnet for your function app's VNet integration. A function app can only be VNet integrated with an empty subnet. Provide a **Subnet Name**. The **Subnet Address Block** may be left at the default value. To learn more about subnet sizing, see [Subnets](functions-networking-options.md#subnets). Select **Ok**. The option to create **Storage private endpoints** is displayed. To use your function app with virtual networks, you need to join it to a subnet. |
+
+ Enter the following settings for the **Storage private endpoint** section. This step creates private endpoints for the blob, queue, file, and table endpoints on your storage account on create. This effectively integrates your storage account with the VNet.
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **Add storage private endpoint** | On | The ability to configure your application with VNet integration at creation is displayed in the portal after this option is enabled. |
+ | **Private endpoint name** | myInboundPrivateEndpointName | Name that identifies your storage account private endpoint. |
+ | **Private endpoint subnet** | Create New | This option creates a new subnet for your inbound private endpoint on the storage account. Multiple private endpoints may be added to a single subnet. Provide a **Subnet Name**. The **Subnet Address Block** may be left at the default value. To learn more about subnet sizing, see [Subnets](functions-networking-options.md#subnets). Select **Ok**. |
+ | **DNS** | Azure Private DNS Zone | This value indicates which DNS server your private endpoint uses. In most cases, if you're working within Azure, Azure Private DNS Zone is the DNS zone to use, because using **Manual** for custom DNS zones adds complexity. |
1. Select **Next: Monitoring**. On the **Monitoring** page, enter the following settings.
You'll create a .NET function app in the Premium plan because this tutorial uses
1. Select **Review + create** to review the app configuration selections.
-1. On the **Review + create** page, review your settings. Then select **Create** to provision and deploy the function app.
+1. On the **Review + create** page, review your settings. Then select **Create** to create and deploy the function app.
1. In the upper-right corner of the portal, select the **Notifications** icon and watch for the **Deployment succeeded** message.
You'll create a .NET function app in the Premium plan because this tutorial uses
Congratulations! You've successfully created your premium function app.
-## Create Azure resources
-
-Next, you'll create a storage account, a Service Bus, and a virtual network.
-### Create a storage account
-
-Your virtual networks will need a storage account that's separate from the one you created with your function app.
-
-1. On the Azure portal menu or the **Home** page, select **Create a resource**.
-
-1. On the **New** page, search for *storage account*. Then select **Create**.
-
-1. On the **Basics** tab, use the following table to configure the storage account settings. All other settings can use the default values.
-
- | Setting | Suggested value | Description |
- | | - | - |
- | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | The resource group you created with your function app. |
- | **Name** | mysecurestorage| The name of the storage account that the private endpoint will be applied to. |
- | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | The region where you created your function app. |
-
-1. Select **Review + create**. After validation finishes, select **Create**.
+> [!NOTE]
+> Some deployments may occasionally fail to create the private endpoints in the storage account with the error 'StorageAccountOperationInProgress'. This failure occurs even though the function app itself gets created successfully. When you encounter this error, delete the function app and retry the operation. Alternatively, you can create the private endpoints on the storage account manually.
### Create a Service Bus
+Next, you create a Service Bus instance that you use to test your function app's networking capabilities in this tutorial.
+ 1. On the Azure portal menu or the **Home** page, select **Create a resource**. 1. On the **New** page, search for *Service Bus*. Then select **Create**.
Your virtual networks will need a storage account that's separate from the one y
| Setting | Suggested value | Description | | | - | - |
- | **Subscription** | Your subscription | The subscription under which your resources are created. |
+ | **Subscription** | Your subscription | The subscription in which your resources are created. |
| **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | The resource group you created with your function app. |
- | **Namespace name** | myServiceBus| The name of the Service Bus that the private endpoint will be applied to. |
+ | **Namespace name** | myServiceBus| The name of the Service Bus instance for which the private endpoint is enabled. |
| **[Location](https://azure.microsoft.com/regions/)** | myFunctionRegion | The region where you created your function app. | | **Pricing tier** | Premium | Choose this tier to use private endpoints with Azure Service Bus. | 1. Select **Review + create**. After validation finishes, select **Create**.
-### Create a virtual network
-
-The Azure resources in this tutorial either integrate with or are placed within a virtual network. You'll use private endpoints to contain network traffic within the virtual network.
-
-The tutorial creates two subnets:
-- **default**: Subnet for private endpoints. Private IP addresses are given from this subnet.-- **functions**: Subnet for Azure Functions virtual network integration. This subnet is delegated to the function app.-
-Create the virtual network to which the function app integrates:
-
-1. On the Azure portal menu or the **Home** page, select **Create a resource**.
-
-1. On the **New** page, search for *virtual network*. Then select **Create**.
-
-1. On the **Basics** tab, use the following table to configure the virtual network settings.
-
- | Setting | Suggested value | Description |
- | | - | - |
- | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | The resource group you created with your function app. |
- | **Name** | myVirtualNet| The name of the virtual network to which your function app will connect. |
- | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | The region where you created your function app. |
-
-1. On the **IP Addresses** tab, select **Add subnet**. Use the following table to configure the subnet settings.
-
- :::image type="content" source="./media/functions-create-vnet/1-create-vnet-ip-address.png" alt-text="Screenshot of the Create virtual network configuration view.":::
-
- | Setting | Suggested value | Description |
- | | - | - |
- | **Subnet name** | functions | The name of the subnet to which your function app will connect. |
- | **Subnet address range** | 10.0.1.0/24 | The subnet address range. In the preceding image, notice that the IPv4 address space is 10.0.0.0/16. If the value were 10.1.0.0/16, the recommended subnet address range would be 10.1.1.0/24. |
-
-1. Select **Review + create**. After validation finishes, select **Create**.
-
-## Lock down your storage account
-
-Azure private endpoints are used to connect to specific Azure resources by using a private IP address. This connection ensures that network traffic remains within the chosen virtual network and access is available only for specific resources.
-
-Create the private endpoints for Azure Files Storage, Azure Blob Storage and Azure Table Storage by using your storage account:
-
-1. In your new storage account, in the menu on the left, select **Networking**.
-
-1. On the **Private endpoint connections** tab, select **Private endpoint**.
-
- :::image type="content" source="./media/functions-create-vnet/2-navigate-private-endpoint-store.png" alt-text="Screenshot of how to create private endpoints for the storage account.":::
-
-1. On the **Basics** tab, use the private endpoint settings shown in the following table.
-
- | Setting | Suggested value | Description |
- | | - | - |
- | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Choose the resource group you created with your function app. |
- | **Name** | file-endpoint | The name of the private endpoint for files from your storage account. |
- | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | Choose the region where you created your storage account. |
-
-1. On the **Resource** tab, use the private endpoint settings shown in the following table.
-
- | Setting | Suggested value | Description |
- | | - | - |
- | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **Resource type** | Microsoft.Storage/storageAccounts | The resource type for storage accounts. |
- | **Resource** | mysecurestorage | The storage account you created. |
- | **Target sub-resource** | file | The private endpoint that will be used for files from the storage account. |
-
-1. On the **Configuration** tab, for the **Subnet** setting, choose **default**.
-
-1. Select **Review + create**. After validation finishes, select **Create**. Resources in the virtual network can now communicate with storage files.
-
-1. Create another private endpoint for blobs. On the **Resources** tab, use the settings shown in the following table. For all other settings, use the same values you used to create the private endpoint for files.
-
- | Setting | Suggested value | Description |
- | | - | - |
- | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **Resource type** | Microsoft.Storage/storageAccounts | The resource type for storage accounts. |
- | **Name** | blob-endpoint | The name of the private endpoint for blobs from your storage account. |
- | **Resource** | mysecurestorage | The storage account you created. |
- | **Target sub-resource** | blob | The private endpoint that will be used for blobs from the storage account. |
-1. Create another private endpoint for tables. On the **Resources** tab, use the settings shown in the following table. For all other settings, use the same values you used to create the private endpoint for files.
-
- | Setting | Suggested value | Description |
- | | - | - |
- | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **Resource type** | Microsoft.Storage/storageAccounts | The resource type for storage accounts. |
- | **Name** | table-endpoint | The name of the private endpoint for blobs from your storage account. |
- | **Resource** | mysecurestorage | The storage account you created. |
- | **Target sub-resource** | table | The private endpoint that will be used for tables from the storage account. |
-1. After the private endpoints are created, return to the **Firewalls and virtual networks** section of your storage account.
-1. Ensure **Selected networks** is selected. It's not necessary to add an existing virtual network.
-
-Resources in the virtual network can now communicate with the storage account using the private endpoint.
## Lock down your Service Bus Create the private endpoint to lock down your Service Bus:
Create the private endpoint to lock down your Service Bus:
| Setting | Suggested value | Description | | | - | - |
- | **Subscription** | Your subscription | The subscription under which your resources are created. |
+ | **Subscription** | Your subscription | The subscription in which your resources are created. |
| **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | The resource group you created with your function app. |
- | **Name** | sb-endpoint | The name of the private endpoint for files from your storage account. |
+ | **Name** | sb-endpoint | The name of the private endpoint for the Service Bus. |
| **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | The region where you created your storage account. | 1. On the **Resource** tab, use the private endpoint settings shown in the following table.
Create the private endpoint to lock down your Service Bus:
| **Subscription** | Your subscription | The subscription under which your resources are created. | | **Resource type** | Microsoft.ServiceBus/namespaces | The resource type for the Service Bus. | | **Resource** | myServiceBus | The Service Bus you created earlier in the tutorial. |
- | **Target subresource** | namespace | The private endpoint that will be used for the namespace from the Service Bus. |
+ | **Target subresource** | namespace | The private endpoint that is used for the namespace from the Service Bus. |
-1. On the **Configuration** tab, for the **Subnet** setting, choose **default**.
+1. On the **Virtual Network** tab, for the **Subnet** setting, choose **default**.
1. Select **Review + create**. After validation finishes, select **Create**.
-1. After the private endpoint is created, return to the **Firewalls and virtual networks** section of your Service Bus namespace.
+1. After the private endpoint is created, return to the **Networking** section of your Service Bus namespace and check the **Public Access** tab.
1. Ensure **Selected networks** is selected. 1. Select **+ Add existing virtual network** to add the recently created virtual network. 1. On the **Add networks** tab, use the network settings from the following table:
Create the private endpoint to lock down your Service Bus:
| Setting | Suggested value | Description| ||--|| | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **Virtual networks** | myVirtualNet | The name of the virtual network to which your function app will connect. |
- | **Subnets** | functions | The name of the subnet to which your function app will connect. |
+ | **Virtual networks** | myVirtualNet | The name of the virtual network to which your function app connects. |
+ | **Subnets** | functions | The name of the subnet to which your function app connects. |
1. Select **Add your client IP address** to give your current client IP access to the namespace. > [!NOTE]
Create the private endpoint to lock down your Service Bus:
Resources in the virtual network can now communicate with the Service Bus using the private endpoint.
-## Create a file share
-
-1. In the storage account you created, in the menu on the left, select **File shares**.
-
-1. Select **+ File shares**. For the purposes of this tutorial, name the file share *files*.
-
- :::image type="content" source="./media/functions-create-vnet/4-create-file-share.png" alt-text="Screenshot of how to create a file share in the storage account.":::
-
-1. Select **Create**.
-
-## Get the storage account connection string
-
-1. In the storage account you created, in the menu on the left, select **Access keys**.
-
-1. Select **Show keys**. Copy and save the connection string of **key1**. You'll need this connection string when you configure the app settings.
-
- :::image type="content" source="./media/functions-create-vnet/5-get-store-connection-string.png" alt-text="Screenshot of how to get a storage account connection string.":::
- ## Create a queue
-Create the queue where your Azure Functions Service Bus trigger will get events:
+Create the queue where your Azure Functions Service Bus trigger gets events:
1. In your Service Bus, in the menu on the left, select **Queues**.
Create the queue where your Azure Functions Service Bus trigger will get events:
1. In your Service Bus, in the menu on the left, select **Shared access policies**.
-1. Select **RootManageSharedAccessKey**. Copy and save the **Primary Connection String**. You'll need this connection string when you configure the app settings.
+1. Select **RootManageSharedAccessKey**. Copy and save the **Primary Connection String**. You need this connection string when you configure the app settings.
:::image type="content" source="./media/functions-create-vnet/7-get-service-bus-connection-string.png" alt-text="Screenshot of how to get a Service Bus connection string.":::
-## Integrate the function app
-
-To use your function app with virtual networks, you need to join it to a subnet. You'll use a specific subnet for the Azure Functions virtual network integration. You'll use the default subnet for other private endpoints you create in this tutorial.
-
-1. In your function app, in the menu on the left, select **Networking**.
-
-1. Under **VNet Integration**, select **Click here to configure**.
-
- :::image type="content" source="./media/functions-create-vnet/8-connect-app-vnet.png" alt-text="Screenshot of how to go to virtual network integration.":::
-
-1. Select **Add VNet**.
-
-1. Under **Virtual Network**, select the virtual network you created earlier.
-
-1. Select the **functions** subnet you created earlier. Select **OK**. Your function app is now integrated with your virtual network!
-
- If the virtual network and function app are in different subscriptions, you need to first provide **Contributor** access to the service principal **Microsoft Azure App Service** on the virtual network.
-
- :::image type="content" source="./media/functions-create-vnet/9-connect-app-subnet.png" alt-text="Screenshot of how to connect a function app to a subnet.":::
-
-1. Ensure that the **Route All** configuration setting is set to **Enabled**.
-
- :::image type="content" source="./media/functions-create-vnet/10-enable-route-all.png" alt-text="Screenshot of how to enable route all functionality.":::
- ## Configure your function app settings 1. In your function app, in the menu on the left, select **Configuration**.
-1. To use your function app with virtual networks, update the app settings shown in the following table. To add or edit a setting, select **+ New application setting** or the **Edit** icon in the rightmost column of the app settings table. When you finish, select **Save**.
+1. To use your function app with virtual networks and service bus, update the app settings shown in the following table. To add or edit a setting, select **+ New application setting** or the **Edit** icon in the rightmost column of the app settings table. When you finish, select **Save**.
| Setting | Suggested value | Description | | | - | - |
- | **AzureWebJobsStorage** | mysecurestorageConnectionString | The connection string of the storage account you created. This storage connection string is from the [Get the storage account connection string](#get-the-storage-account-connection-string) section. This setting allows your function app to use the secure storage account for normal operations at runtime. |
- | **WEBSITE_CONTENTAZUREFILECONNECTIONSTRING** | mysecurestorageConnectionString | The connection string of the storage account you created. This setting allows your function app to use the secure storage account for Azure Files, which is used during deployment. |
- | **WEBSITE_CONTENTSHARE** | files | The name of the file share you created in the storage account. Use this setting with WEBSITE_CONTENTAZUREFILECONNECTIONSTRING. |
| **SERVICEBUS_CONNECTION** | myServiceBusConnectionString | Create this app setting for the connection string of your Service Bus. This storage connection string is from the [Get a Service Bus connection string](#get-a-service-bus-connection-string) section.| | **WEBSITE_CONTENTOVERVNET** | 1 | Create this app setting. A value of 1 enables your function app to scale when your storage account is restricted to a virtual network. |
-1. In the **Configuration** view, select the **Function runtime settings** tab.
-
-1. Set **Runtime Scale Monitoring** to **On**. Then select **Save**. Runtime-driven scaling allows you to connect non-HTTP trigger functions to services that run inside your virtual network.
+1. Because you're using an Elastic Premium hosting plan, in the **Configuration** view, select the **Function runtime settings** tab. Set **Runtime Scale Monitoring** to **On**. Then select **Save**. Runtime-driven scaling allows you to connect non-HTTP trigger functions to services that run inside your virtual network.
:::image type="content" source="./media/functions-create-vnet/11-enable-runtime-scaling.png" alt-text="Screenshot of how to enable runtime-driven scaling for Azure Functions.":::
+> [!NOTE]
+> Runtime scaling isn't needed for function apps hosted in a Dedicated App Service plan.
+ ## Deploy a Service Bus trigger and HTTP trigger > [!NOTE]
-> Enabling Private Endpoints on a Function App also makes the Source Control Manager (SCM) site publicly inaccessible. The following instructions give deployment directions using the Deployment Center within the Function App. Alternatively, use [zip deploy](functions-deployment-technologies.md#zip-deploy) or [self-hosted](/azure/devops/pipelines/agents/docker) agents that are deployed into a subnet on the virtual network.
+> Enabling private endpoints on a function app also makes the Source Control Manager (SCM) site publicly inaccessible. The following instructions give deployment directions using the Deployment Center within the function app. Alternatively, use [zip deploy](functions-deployment-technologies.md#zip-deploy) or [self-hosted](/azure/devops/pipelines/agents/docker) agents that are deployed into a subnet on the virtual network.
1. In GitHub, go to the following sample repository. It contains a function app and two functions, an HTTP trigger, and a Service Bus queue trigger.
To use your function app with virtual networks, you need to join it to a subnet.
| **Repository** | functions-vnet-tutorial | The repository forked from https://github.com/Azure-Samples/functions-vnet-tutorial. | | **Branch** | main | The main branch of the repository you created. | | **Runtime stack** | .NET | The sample code is in C#. |
- | **Version** | v4.0 | The runtime version. |
+ | **Version** | .NET Core 3.1 | The runtime version. |
1. Select **Save**.
To use your function app with virtual networks, you need to join it to a subnet.
Congratulations! You've successfully deployed your sample function app.
-## Lock down your function app
-
-Now create the private endpoint to lock down your function app. This private endpoint will connect your function app privately and securely to your virtual network by using a private IP address.
-
-For more information, see the [private endpoint documentation](../private-link/private-endpoint-overview.md).
-
-1. In your function app, in the menu on the left, select **Networking**.
-
-1. Under **Private Endpoint Connections**, select **Configure your private endpoint connections**.
-
- :::image type="content" source="./media/functions-create-vnet/14-navigate-app-private-endpoint.png" alt-text="Screenshot of how to navigate to a function app private endpoint.":::
-
-1. Select **Add**.
-
-1. On the pane that opens, use the following private endpoint settings:
-
- :::image type="content" source="./media/functions-create-vnet/15-create-app-private-endpoint.png" alt-text="Screenshot of how to create a function app private endpoint. The name is functionapp-endpoint. The subscription is 'Private Test Sub CACHHAI'. The virtual network is MyVirtualNet-tutorial. The subnet is default.":::
-
-1. Select **OK** to add the private endpoint.
-
-Congratulations! You've successfully secured your function app, Service Bus, and storage account by adding private endpoints!
- ### Test your locked-down function app 1. In your function app, in the menu on the left, select **Functions**.
Congratulations! You've successfully secured your function app, Service Bus, and
1. In the menu on the left, select **Monitor**.
-You'll see that you can't monitor your app. Your browser doesn't have access to the virtual network, so it can't directly access resources within the virtual network.
+You see that you can't monitor your app. Your browser doesn't have access to the virtual network, so it can't directly access resources within the virtual network.
Here's an alternative way to monitor your function by using Application Insights:
In this tutorial, you created a Premium function app, storage account, and Servi
Use the following links to learn more Azure Functions networking options and private endpoints:
+- [How to configure Azure Functions with a virtual network](./configure-networking-how-to.md)
- [Networking options in Azure Functions](./functions-networking-options.md) - [Azure Functions Premium plan](./functions-premium-plan.md) - [Service Bus private endpoints](../service-bus-messaging/private-link-service.md)
azure-functions Functions Networking Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-networking-options.md
This article describes the networking features available across the hosting opti
The hosting models have different levels of network isolation available. Choosing the correct one helps you meet your network isolation requirements.
-You can host function apps in a couple of ways:
+You can host function apps in several ways:
* You can choose from plan options that run on a multitenant infrastructure, with various levels of virtual network connectivity and scaling options: * The [Consumption plan](consumption-plan.md) scales dynamically in response to load and offers minimal network isolation options.
You can host function apps in a couple of ways:
[!INCLUDE [functions-networking-features](../../includes/functions-networking-features.md)]
-## Quick start resources
+## Quickstart resources
Use the following resources to quickly get started with Azure Functions networking scenarios. These resources are referenced throughout the article. * ARM, Bicep, and Terraform templates:
- * [Private HTTP Triggered Function App](https://github.com/Azure-Samples/function-app-with-private-http-endpoint)
- * [Private Event Hubs Triggered Function App](https://github.com/Azure-Samples/function-app-with-private-eventhub)
+ * [Private HTTP triggered function app](https://github.com/Azure-Samples/function-app-with-private-http-endpoint)
+ * [Private Event Hubs triggered function app](https://github.com/Azure-Samples/function-app-with-private-eventhub)
* ARM templates only:
- * [Function App with Azure Storage private endpoints](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/function-app-storage-private-endpoints).
- * [Azure Function App with Virtual Network Integration](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-vnet-integration).
+ * [Function app with Azure Storage private endpoints](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/function-app-storage-private-endpoints).
+ * [Azure function app with Virtual Network Integration](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-vnet-integration).
* Tutorials:
+ * [Integrate Azure Functions with an Azure virtual network by using private endpoints](functions-create-vnet.md)
* [Restrict your storage account to a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network). * [Control Azure Functions outbound IP with an Azure virtual network NAT gateway](functions-how-to-use-nat-gateway.md).
To call other services that have a private endpoint connection, such as storage
### Service endpoints
-Using service endpoints, you can restrict a number of Azure services to selected virtual network subnets to provide a higher level of security. Regional virtual network integration enables your function app to reach Azure services that are secured with service endpoints. This configuration is supported on all [plans](functions-scale.md#networking-features) that support virtual network integration. To access a service endpoint-secured service, you must do the following:
+Using service endpoints, you can restrict many Azure services to selected virtual network subnets to provide a higher level of security. Regional virtual network integration enables your function app to reach Azure services that are secured with service endpoints. This configuration is supported on all [plans](functions-scale.md#networking-features) that support virtual network integration. To access a service endpoint-secured service, you must do the following:
1. Configure regional virtual network integration with your function app to connect to a specific subnet.
1. Go to the destination service and configure service endpoints against the integration subnet.
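As a rough illustration of those two steps, the following Azure CLI sketch restricts a storage account (a hypothetical destination service) to the integration subnet. All resource names are placeholders, and it assumes regional virtual network integration is already configured for the function app.

```bash
# Hypothetical resource names; replace with your own resource group, VNet, subnet, and storage account.
# Enable the Microsoft.Storage service endpoint on the integration subnet.
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name integration-subnet \
  --service-endpoints Microsoft.Storage

# Allow traffic from that subnet on the destination storage account.
az storage account network-rule add \
  --resource-group my-rg \
  --account-name mystorageaccount \
  --vnet-name my-vnet \
  --subnet integration-subnet
```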
To learn more, see [Virtual network service endpoints](../virtual-network/virtua
To restrict access to a specific subnet, create a restriction rule with a **Virtual Network** type. You can then select the subscription, virtual network, and subnet that you want to allow or deny access to.
-If service endpoints aren't already enabled with Microsoft.Web for the subnet that you selected, they'll be automatically enabled unless you select the **Ignore missing Microsoft.Web service endpoints** check box. The scenario where you might want to enable service endpoints on the app but not the subnet depends mainly on whether you have the permissions to enable them on the subnet.
+If service endpoints aren't already enabled with Microsoft.Web for the subnet that you selected, they are automatically enabled unless you select the **Ignore missing Microsoft.Web service endpoints** check box. The scenario where you might want to enable service endpoints on the app but not the subnet depends mainly on whether you have the permissions to enable them on the subnet.
-If you need someone else to enable service endpoints on the subnet, select the **Ignore missing Microsoft.Web service endpoints** check box. Your app will be configured for service endpoints in anticipation of having them enabled later on the subnet.
+If you need someone else to enable service endpoints on the subnet, select the **Ignore missing Microsoft.Web service endpoints** check box. Your app is configured for service endpoints in anticipation of having them enabled later on the subnet.
![Screenshot of the "Add IP Restriction" pane with the Virtual Network type selected.](../app-service/media/app-service-ip-restrictions/access-restrictions-vnet-add.png)
To learn how to set up virtual network integration, see [Enable virtual network
### Enable virtual network integration
-1. Go to the **Networking** blade in the Function App portal. Under **VNet Integration**, select **Click here to configure**.
+1. In your function app in the [Azure portal](https://portal.azure.com), select **Networking**, then under **VNet Integration** select **Click here to configure**.
1. Select **Add VNet**.
To learn how to set up virtual network integration, see [Enable virtual network
:::image type="content" source="./media/functions-networking-options/vnet-int-add-vnet-function-app.png" alt-text="Select the VNet"::: * The Functions Premium Plan only supports regional virtual network integration. If the virtual network is in the same region, either create a new subnet or select an empty, pre-existing subnet.
- * To select a virtual network in another region, you must have a virtual network gateway provisioned with point to site enabled. Virtual network integration across regions is only supported for Dedicated plans, but global peerings will work with regional virtual network integration.
-During the integration, your app is restarted. When integration is finished, you'll see details on the virtual network you're integrated with. By default, Route All will be enabled, and all traffic will be routed into your virtual network.
+ * To select a virtual network in another region, you must have a virtual network gateway provisioned with point to site enabled. Virtual network integration across regions is only supported for Dedicated plans, but global peerings work with regional virtual network integration.
+
+During the integration, your app is restarted. When integration is finished, you see details on the virtual network you're integrated with. By default, Route All is enabled, and all traffic is routed into your virtual network.
If you want only your private traffic ([RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) to be routed, follow the steps in the [App Service documentation](../app-service/overview-vnet-integration.md#application-routing).
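If you prefer to script the integration, the following Azure CLI sketch shows one possible equivalent of the portal steps above; the resource names are placeholders, and the `WEBSITE_VNET_ROUTE_ALL` app setting is one way to force the Route All behavior.

```bash
# Hypothetical names; the function app and VNet are assumed to be in the same region (Premium plan).
az functionapp vnet-integration add \
  --resource-group my-rg \
  --name my-function-app \
  --vnet my-vnet \
  --subnet integration-subnet

# Route all outbound traffic into the virtual network (mirrors the Route All default in the portal).
az functionapp config appsettings set \
  --resource-group my-rg \
  --name my-function-app \
  --settings WEBSITE_VNET_ROUTE_ALL=1
```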
There are some limitations with using virtual network integration:
Virtual network integration depends on a dedicated subnet. When you provision a subnet, the Azure subnet loses five IPs from the start. One address is used from the integration subnet for each plan instance. When you scale your app to four instances, then four addresses are used.
-When you scale up or down in size, the required address space is doubled for a short period of time. This affects the real, available supported instances for a given subnet size. The following table shows both the maximum available addresses per CIDR block and the impact this has on horizontal scale:
+When you scale up or down in size, the required address space is doubled for a short period of time. This affects the real, available supported instances for a given subnet size. The following table shows both the maximum available addresses per CIDR block and the effect this has on horizontal scale:
| CIDR block size | Max available addresses | Max horizontal scale (instances)<sup>*</sup> |
|--|--|--|
When you scale up or down in size, the required address space is doubled for a s
| /27 | 27 | 13 |
| /26 | 59 | 29 |
-<sup>*</sup>Assumes that you'll need to scale up or down in either size or SKU at some point.
+<sup>*</sup>Assumes that you need to scale up or down in either size or SKU at some point.
Since subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity for Functions Premium plans, you should use a /24 with 256 addresses for Windows and a /26 with 64 addresses for Linux. When creating subnets in the Azure portal as part of integrating with the virtual network, a minimum size of /24 and /26 is required for Windows and Linux, respectively.
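For example, a subnet sized as recommended above for a Windows Premium plan could be created with an Azure CLI command along these lines; the names and address range are placeholders.

```bash
# Hypothetical names and address range; /24 gives headroom for scaling a Windows Premium plan.
az network vnet subnet create \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name integration-subnet \
  --address-prefixes 10.0.1.0/24 \
  --delegations Microsoft.Web/serverFarms
```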
After your app integrates with your virtual network, it uses the same DNS server
## Restrict your storage account to a virtual network

> [!NOTE]
-> To quickly deploy a function app with private endpoints enabled on the storage account, please refer to the following template: [Function App with Azure Storage private endpoints](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/function-app-storage-private-endpoints).
+> To quickly deploy a function app with private endpoints enabled on the storage account, please refer to the following template: [Function app with Azure Storage private endpoints](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/function-app-storage-private-endpoints).
When you create a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. You can replace this storage account with one that is secured with service endpoints or private endpoints.
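As a minimal sketch of the private endpoint approach, the following Azure CLI commands create a private endpoint for the blob sub-resource of a hypothetical storage account; in practice you'd repeat this for the file, queue, and table sub-resources and configure the matching private DNS zones.

```bash
# Hypothetical names; run once per storage sub-resource (blob, file, queue, table) your app needs.
storage_id=$(az storage account show \
  --resource-group my-rg \
  --name mystorageaccount \
  --query id --output tsv)

az network private-endpoint create \
  --resource-group my-rg \
  --name mystorageaccount-blob-pe \
  --vnet-name my-vnet \
  --subnet private-endpoint-subnet \
  --private-connection-resource-id "$storage_id" \
  --group-id blob \
  --connection-name mystorageaccount-blob-connection
```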
To learn more about networking and Azure Functions:
<!--Links-->
[VNETnsg]: ../virtual-network/network-security-groups-overview.md
-[privateendpoints]: ../app-service/networking/private-endpoint.md
+[privateendpoints]: ../app-service/networking/private-endpoint.md
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
Azure Maps is a collection of geospatial services and SDKs that use fresh mappin
* Various routing options, such as point-to-point, multipoint, multipoint optimization, isochrone, electric vehicle, commercial vehicle, traffic influenced, and matrix routing.
* Traffic flow view and incidents view, for applications that require real-time traffic information.
* Time zone and Geolocation services.
-* Elevation services with Digital Elevation Model
* Geofencing service and mapping data storage, with location information hosted in Azure.
* Location intelligence through geospatial analytics.
Maps Creator provides the following
* [Wayfinding service] (preview). Use the [wayfinding API] to generate a path between two points within a facility. Use the [routeset API] to create the data that the wayfinding service needs to generate paths.
-### Elevation service
-
-The Azure Maps Elevation service is a web service that developers can use to retrieve elevation data from anywhere on the EarthΓÇÖs surface.
-
-The Elevation service allows you to retrieve elevation data in two formats:
-
-* **GeoTIFF raster format**. Use the [Render V2-Get Map Tile API](/rest/api/maps/renderv2) to retrieve elevation data in tile format.
-
-* **GeoJSON format**. Use the [Elevation APIs](/rest/api/maps/elevation) to request sampled elevation data along paths, within a defined bounding box, or at specific coordinates.
## Programming model

Azure Maps is built for mobility and can help you develop cross-platform applications. It uses a programming model that's language agnostic and supports JSON output through [REST APIs](/rest/api/maps/).
azure-maps Azure Maps Qps Rate Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-qps-rate-limits.md
Below are the QPS usage limits for each Azure Maps service by Pricing Tier.
| Creator - Alias, TilesetDetails | 10 | Not Available | Not Available |
| Creator - Conversion, Dataset, Feature State, WFS | 50 | Not Available | Not Available |
| Data service | 50 | 50 | Not Available |
-| Elevation service | 50 | 50 | Not Available |
+| Elevation service ([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023)) | 50 | 50 | Not Available |
| Geolocation service | 50 | 50 | 50 |
-| Render service - Contour tiles, Digital Elevation Model (DEM) tiles and Customer tiles | 50 | 50 | Not Available |
+| Render service - Contour tiles, Digital Elevation Model (DEM) tiles ([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023)) and Customer tiles | 50 | 50 | Not Available |
| Render service - Traffic tiles and Static maps | 50 | 50 | 50 |
| Render service - Road tiles | 500 | 500 | 50 |
| Render service - Satellite tiles | 250 | 250 | Not Available |
azure-maps Drawing Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md
Below is the manifest file for the sample drawing package. Go to the [Sample dra
You can convert uploaded drawing packages into map data by using the Azure Maps [Conversion service]. This article describes the drawing package requirements for the Conversion API. To view a sample package, you can download the [sample drawing package v2].
-For a guide on how to prepare your drawing package, see [Conversion Drawing Package Guide].
+For a guide on how to prepare your drawing package, see [Drawing Package Guide].
## Changes and Revisions
The JSON in this example shows the manifest file for the sample drawing package.
## Next steps
-For a guide on how to prepare your drawing package, see [Conversion Drawing Package Guide].
+For a guide on how to prepare your drawing package, see the drawing package guide.
> [!div class="nextstepaction"] > [Drawing Package Guide]
azure-maps How To Dev Guide Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-java-sdk.md
# Java REST SDK Developers Guide (preview)
-The Azure Maps Java SDK can be integrated with Java applications and libraries to build maps-related and location-aware applications. The Azure Maps Java SDK contains APIs for Search, Route, Render, Elevation, Geolocation, Traffic, Timezone, and Weather. These APIs support operations such as searching for an address, routing between different coordinates, obtaining the geo-location of a specific IP address etc.
+The Azure Maps Java SDK can be integrated with Java applications and libraries to build maps-related and location-aware applications. The Azure Maps Java SDK contains APIs for Search, Route, Render, Geolocation, Traffic, Timezone, and Weather. These APIs support operations such as searching for an address, routing between different coordinates, and obtaining the geo-location of a specific IP address.
> [!NOTE]
> Azure Maps Java SDK is baselined on Java 8, with testing and forward support up until the latest Java long-term support release (currently Java 18). For the list of Java versions for download, see [Java Standard Versions].
New-Item demo.java
| [Rendering][java rendering readme]| [azure-maps-rendering][java rendering package]|[rendering sample][java rendering sample] |
| [Geolocation][java geolocation readme]|[azure-maps-geolocation][java geolocation package]|[geolocation sample][java geolocation sample] |
| [Timezone][java timezone readme] | [azure-maps-timezone][java timezone package] | [timezone samples][java timezone sample] |
-| [Elevation][java elevation readme] | [azure-maps-elevation][java elevation package] | [elevation samples][java elevation sample] |
+| [Elevation][java elevation readme] ([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023))| [azure-maps-elevation][java elevation package] | [elevation samples][java elevation sample] |
## Create and authenticate a MapsSearchClient
azure-maps How To Request Elevation Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-request-elevation-data.md
Title: Request elevation data using the Azure Maps Elevation service
description: Learn how to request elevation data using the Azure Maps Elevation service.
Last updated: 10/28/2021
# Request elevation data using the Azure Maps Elevation service
+> [!IMPORTANT]
+> The Azure Maps Elevation services and Render V2 DEM tiles have been retired and will no longer be available or supported after May 5, 2023. No other Azure Maps API, services or tilesets are affected. For more information, see [Elevation Services Retirement](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023).
+
The Azure Maps [Elevation service](/rest/api/maps/elevation) provides APIs to query elevation data anywhere on the earth's surface. You can request sampled elevation data along paths, within a defined bounding box, or at specific coordinates. Also, you can use the [Render V2 - Get Map Tile API](/rest/api/maps/renderv2) to retrieve elevation data in tile format. The tiles are delivered in GeoTIFF raster format. This article describes how to use the Azure Maps Elevation service and the Get Map Tile API to request elevation data. The elevation data can be requested in both GeoJSON and GeoTIFF formats.

## Prerequisites
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md
# Tutorial: Migrate web service from Bing Maps
-Both Azure and Bing Maps provide access to spatial APIs through REST web services. The API interfaces for these platforms perform similar functionalities but use different naming conventions and response objects. In this tutorial, you will learn how to:
+Both Azure and Bing Maps provide access to spatial APIs through REST web services. The API interfaces for these platforms perform similar functionalities but use different naming conventions and response objects. This tutorial demonstrates how to:
> * Forward and reverse geocoding
> * Search for points of interest
Both Azure and Bing Maps provide access to spatial APIs through REST web service
The following table provides the Azure Maps service APIs that provide similar functionality to the listed Bing Maps service APIs.
-| Bing Maps service API | Azure Maps service API |
-||--|
-| Autosuggest | [Search](/rest/api/maps/search) |
-| Directions (including truck) | [Route directions](/rest/api/maps/route/getroutedirections) |
-| Distance Matrix | [Route Matrix](/rest/api/maps/route/postroutematrixpreview) |
-| Imagery ΓÇô Static Map | [Render](/rest/api/maps/render/getmapimage) |
-| Isochrones | [Route Range](/rest/api/maps/route/getrouterange) |
-| Local Insights | [Search](/rest/api/maps/search) + [Route Range](/rest/api/maps/route/getrouterange) |
-| Local Search | [Search](/rest/api/maps/search) |
-| Location Recognition (POIs) | [Search](/rest/api/maps/search) |
-| Locations (forward/reverse geocoding) | [Search](/rest/api/maps/search) |
-| Snap to Road | [POST Route directions](/rest/api/maps/route/postroutedirections) |
-| Spatial Data Services (SDS) | [Search](/rest/api/maps/search) + [Route](/rest/api/maps/route) + other Azure Services |
-| Time Zone | [Time Zone](/rest/api/maps/timezone) |
-| Traffic Incidents | [Traffic Incident Details](/rest/api/maps/traffic/gettrafficincidentdetail) |
-| Elevation | [Elevation](/rest/api/maps/elevation)
-
-The following service APIs are not currently available in Azure Maps:
+| Bing Maps service API | Azure Maps service API |
+||-|
+| Autosuggest | [Search] |
+| Directions (including truck) | [Route directions] |
+| Distance Matrix | [Route Matrix] |
+| Imagery – Static Map | [Render] |
+| Isochrones | [Route Range] |
+| Local Insights | [Search] + [Route Range] |
+| Local Search | [Search] |
+| Location Recognition (POIs) | [Search] |
+| Locations (forward/reverse geocoding) | [Search] |
+| Snap to Road | [POST Route directions] |
+| Spatial Data Services (SDS) | [Search] + [Route] + other Azure Services |
+| Time Zone | [Time Zone] |
+| Traffic Incidents | [Traffic Incident Details] |
+| Elevations | <sup>1</sup> |
+
+<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in your Azure Maps solution, see [Create elevation data & services](elevation-data-services.md).
+
+The following service APIs aren't currently available in Azure Maps:
* Optimized Itinerary Routes - Planned. Azure Maps Route API does support traveling salesman optimization for a single vehicle.
* Imagery Metadata – Primarily used for getting tile URLs in Bing Maps. Azure Maps has a standalone service for directly accessing map tiles.
+* Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in your Azure Maps solution, see [Create elevation data & services](elevation-data-services.md).
-Azure Maps has several additional REST web services that may be of interest;
+Azure Maps also has these REST web services:
-* [Azure Maps Creator](./creator-indoor-maps.md) ΓÇô Create a custom private digital twin of buildings and spaces.
-* [Spatial operations](/rest/api/maps/spatial) ΓÇô Offload complex spatial calculations and operations, such as geofencing, to a service.
-* [Map Tiles](/rest/api/maps/render/getmaptile) ΓÇô Access road and imagery tiles from Azure Maps as raster and vector tiles.
-* [Batch routing](/rest/api/maps/route/postroutedirectionsbatchpreview) ΓÇô Allows up to 1,000 route requests to be made in a single batch over a period of time. Routes are calculated in parallel on the server for faster processing.
-* [Traffic](/rest/api/maps/traffic) Flow ΓÇô Access real-time traffic flow data as both raster and vector tiles.
-* [Geolocation API](/rest/api/maps/geolocation/get-ip-to-location) ΓÇô Get the location of an IP address.
-* [Weather services](/rest/api/maps/weather) ΓÇô Gain access to real-time and forecast weather data.
+* [Azure Maps Creator] – Create a custom private digital twin of buildings and spaces.
+* [Spatial operations] – Offload complex spatial calculations and operations, such as geofencing, to a service.
+* [Map Tiles] – Access road and imagery tiles from Azure Maps as raster and vector tiles.
+* [Batch routing] – Allows up to 1,000 route requests to be made in a single batch over a period of time. Routes are calculated in parallel on the server for faster processing.
+* [Traffic] Flow – Access real-time traffic flow data as both raster and vector tiles.
+* [Geolocation API] – Get the location of an IP address.
+* [Weather services] – Gain access to real-time and forecast weather data.
Be sure to also review the following best practices guides:
-* [Best practices for search](./how-to-use-best-practices-for-search.md)
-* [Best practices for routing](./how-to-use-best-practices-for-routing.md)
+* [Best practices for Azure Maps Search service]
+* [Best practices for Azure Maps Route service]
## Prerequisites
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+If you don't have an Azure subscription, create a [free account] before you begin.
* An [Azure Maps account]
* A [subscription key]

> [!NOTE]
-> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](how-to-manage-authentication.md).
+> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
## Geocoding addresses

Geocoding is the process of converting an address (like `"1 Microsoft way, Redmond, WA"`) into a coordinate (like longitude: -122.1298, latitude: 47.64005). Coordinates are then often used to position a pushpin on a map or center a map.
-Azure Maps provides several methods for geocoding addresses;
+Azure Maps provides several methods for geocoding addresses:
-* [Free-form address geocoding](/rest/api/maps/search/getsearchaddress): Specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
-* [Structured address geocoding](/rest/api/maps/search/getsearchaddressstructured): Specify the parts of a single address, such as the street name, city, country, and postal code and process the request immediately. This service is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
-* [Batch address geocoding](/rest/api/maps/search/postsearchaddressbatchpreview): Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses will be geocoded in parallel on the server and when completed the full result set can be downloaded. This service is recommended for geocoding large data sets.
-* [Fuzzy search](/rest/api/maps/search/getsearchfuzzy): This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
-* [Fuzzy batch search](/rest/api/maps/search/postsearchfuzzybatchpreview): Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data will be processed in parallel on the server and when completed the full result set can be downloaded.
+* [Free-form address geocoding]: Specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
+* [Structured address geocoding]: Specify the parts of a single address, such as the street name, city, country, and postal code and process the request immediately. This service is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
+* [Batch address geocoding]: Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses are geocoded in parallel on the server and when completed the full result set can be downloaded. This service is recommended for geocoding large data sets.
+* [Fuzzy search]: This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
+* [Fuzzy batch search]: Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
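For example, a free-form geocoding request is a single GET call; the sketch below assumes a placeholder subscription key and uses the address from the example above.

```bash
# Placeholder subscription key; returns candidate coordinates for the address as JSON.
curl "https://atlas.microsoft.com/search/address/json?api-version=1.0&subscription-key=<your-subscription-key>&query=1%20Microsoft%20Way%2C%20Redmond%2C%20WA"
```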
The following tables cross-reference the Bing Maps API parameters with the comparable API parameters in Azure Maps for structured and free-form address geocoding.

**Location by Address (structured address)**
-| Bing Maps API parameter | Comparable Azure Maps API parameter |
-|--|--|
-| `addressLine` | `streetNumber`, `streetName` or `crossStreet` |
-| `adminDistrict` | `countrySubdivision` |
-| `countryRegion` | `country` and `countryCode` |
-| `locality` | `municipality` or `municipalitySubdivision` |
-| `postalCode` | `postalCode` |
-| `maxResults` (`maxRes`) | `limit` |
+| Bing Maps API parameter | Comparable Azure Maps API parameter |
+|-|--|
+| `addressLine` | `streetNumber`, `streetName` or `crossStreet` |
+| `adminDistrict` | `countrySubdivision` |
+| `countryRegion` | `country` and `countryCode` |
+| `locality` | `municipality` or `municipalitySubdivision` |
+| `postalCode` | `postalCode` |
+| `maxResults` (`maxRes`) | `limit` |
| `includeNeighborhood` (`inclnb`) | N/A – Always returned by Azure Maps if available. |
| `include` (`incl`) | N/A – Country ISO2 Code always returned by Azure Maps. |
-| `key` | `subscription-key` ΓÇô See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` ΓÇô See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` ΓÇô See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` – For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` – For more information, see [Azure Maps supported views]. |
-Azure Maps also supports;
+Azure Maps also supports:
* `countrySecondarySubdivision` – County, districts
-* `countryTertiarySubdivision` - Named areas; boroughs, cantons, communes
+* `countryTertiarySubdivision` - Named areas, boroughs, cantons, communes
* `ofs` - Page through the results in combination with `maxResults` parameter.

**Location by Query (free-form address string)**
-| Bing Maps API parameter | Comparable Azure Maps API parameter |
-|--||
-| `query` | `query` |
-| `maxResults` (`maxRes`) | `limit` |
+| Bing Maps API parameter | Comparable Azure Maps API parameter |
+|-||
+| `query` | `query` |
+| `maxResults` (`maxRes`) | `limit` |
| `includeNeighborhood` (`inclnb`) | N/A – Always returned by Azure Maps if available. |
| `include` (`incl`) | N/A – Country ISO2 Code always returned by Azure Maps. |
-| `key` | `subscription-key` ΓÇô See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` ΓÇô See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` ΓÇô See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` – For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` – For more information, see [Azure Maps supported views]. |
-Azure Maps also supports;
+Azure Maps also supports:
-* `typeahead` - Species if the query will be interpreted as a partial input and the search will enter predictive mode (autosuggest/autocomplete).
+* `typeahead` - Specifies if the query is interpreted as a partial input and the search enters predictive mode (autosuggest/autocomplete).
* `countrySet` ΓÇô A comma-separated list of ISO2 countries codes in which to limit the search to. * `lat`/`lon`, `topLeft`/`btmRight`, `radius` ΓÇô Specify user location and area to make the results more locally relevant. * `ofs` - Page through the results in combination with `maxResults` parameter.
-An example of how to use the search service is documented [here](./how-to-search-for-address.md). Be sure to review the [best practices for search](./how-to-use-best-practices-for-search.md) documentation.
+For more information on using the search service, see [Search for a location using Azure Maps Search services] and [Best practices for Azure Maps Search service].
## Reverse geocode a coordinate (Find a Location by Point)

Reverse geocoding is the process of converting geographic coordinates (like longitude: -122.1298, latitude: 47.64005) into an approximate address (like `"1 Microsoft way, Redmond, WA"`).
-Azure Maps provides several reverse geocoding methods;
+Azure Maps provides several reverse geocoding methods:
-* [Address reverse geocoder](/rest/api/maps/search/getsearchaddressreverse): Specify a single geographic coordinate to get its approximate address and process the request immediately.
-* [Cross street reverse geocoder](/rest/api/maps/search/getsearchaddressreversecrossstreet): Specify a single geographic coordinate to get nearby cross street information (for example, 1st & main) and process the request immediately.
-* [Batch address reverse geocoder](/rest/api/maps/search/postsearchaddressreversebatchpreview): Create a request containing up to 10,000 coordinates and have them processed over a period of time. All the data will be processed in parallel on the server and when completed the full result set can be downloaded.
+* [Address reverse geocoder]: Specify a single geographic coordinate to get its approximate address and process the request immediately.
+* [Cross street reverse geocoder]: Specify a single geographic coordinate to get nearby cross street information (for example, 1st & main) and process the request immediately.
+* [Batch address reverse geocoder]: Create a request containing up to 10,000 coordinates and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
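A single reverse geocoding call looks like the following sketch; the subscription key is a placeholder, and note that the `query` value is `latitude,longitude` for this API.

```bash
# Placeholder subscription key; returns the approximate address for the coordinate as JSON.
curl "https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&subscription-key=<your-subscription-key>&query=47.64005,-122.1298"
```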
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.
-| Bing Maps API parameter | Comparable Azure Maps API parameter |
-|--|-|
-| `point` | `query` |
-| `includeEntityTypes` | `entityType` ΓÇô See entity type comparison table below. |
-| `includeNeighborhood` (`inclnb`) | N/A ΓÇô Always returned by Azure Maps if available. |
-| `include` (`incl`) | N/A ΓÇô Country ISO2 Code always returned by Azure Maps. |
-| `key` | `subscription-key` ΓÇô See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` ΓÇô See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` ΓÇô See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| Bing Maps API parameter | Comparable Azure Maps API parameter |
+|--|-|
+| `point` | `query` |
+| `includeEntityTypes` | `entityType` – See entity type comparison table below.|
+| `includeNeighborhood` (`inclnb`) | N/A – Always returned by Azure Maps if available. |
+| `include` (`incl`) | N/A – Country ISO2 Code always returned by Azure Maps.|
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` – For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` – For more information, see [Azure Maps supported views]. |
-Be sure to review the [best practices for search](./how-to-use-best-practices-for-search.md) documentation.
+For more information on searching in Azure Maps, see [Best practices for Azure Maps Search service].
-The Azure Maps reverse geocoding API has some additional features not available in Bing Maps that might be useful to integrate when migrating your app:
+The Azure Maps reverse geocoding API has features not available in Bing Maps that might be useful to integrate when migrating your app:
* Retrieve speed limit data.
-* Retrieve road use information; local road, arterial, limited access, ramp, etc.
+* Retrieve road use information: local road, arterial, limited access, ramp, and so on.
* The side of street the coordinate falls on.

**Entity type comparison table**

The following table cross-references the Bing Maps entity type values to the equivalent property names in Azure Maps.
-| Bing Maps Entity Type | Comparable Azure Maps Entity type | Description |
-|--|-|--|
-| `Address` | | *Address* |
-| `Neighborhood` | `Neighbourhood` | *Neighborhood* |
-| `PopulatedPlace` | `Municipality` or `MunicipalitySubdivision` | *City*, *Town or Sub*, or *Super City* |
-| `Postcode1` | `PostalCodeArea` | *Postal Code* or *Zip Code* |
-| `AdminDivision1` | `CountrySubdivision` | *State* or *Province* |
-| `AdminDivision2` | `CountrySecondarySubdivison` | *County* or *districts* |
-| `CountryRegion` | `Country` | *Country name* |
-| | `CountryTertiarySubdivision` | *Boroughs*, *Cantons*, *Communes* |
+| Bing Maps Entity Type | Comparable Azure Maps Entity type | Description |
+|--||-|
+| `Address` | | *Address* |
+| `Neighborhood` | `Neighbourhood` | *Neighborhood* |
+| `PopulatedPlace` | `Municipality` or `MunicipalitySubdivision` | *City*, *Town or Sub*, or *Super City* |
+| `Postcode1` | `PostalCodeArea` | *Postal Code* or *Zip Code* |
+| `AdminDivision1` | `CountrySubdivision` | *State* or *Province* |
+| `AdminDivision2` | `CountrySecondarySubdivison` | *County* or *districts* |
+| `CountryRegion` | `Country` | *Country name* |
+| | `CountryTertiarySubdivision` | *Boroughs*, *Cantons*, *Communes* |
## Get location suggestions (Autosuggest)
-Several of the Azure Maps search APIΓÇÖs support predictive mode that can be used for autosuggest scenarios. The Azure Maps [fuzzy search](/rest/api/maps/search/getsearchfuzzy) API is the most like the Bing Maps Autosuggest API. The following APIΓÇÖs also support predictive mode, add `&typeahead=true` to the query;
+Several of the Azure Maps search APIs support a predictive mode that can be used for autosuggest scenarios. The Azure Maps [fuzzy search] API is the most like the Bing Maps Autosuggest API. The following APIs also support predictive mode; add `&typeahead=true` to the query:
-* [Free-form address geocoding](/rest/api/maps/search/getsearchaddress): Specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
-* [Fuzzy search](/rest/api/maps/search/getsearchfuzzy): This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
-* [POI search](/rest/api/maps/search/getsearchpoi): Search for points of interests by name. For example; `"starbucks"`.
-* [POI category search](/rest/api/maps/search/getsearchpoicategory): Search for points of interests by category. For example; "restaurant".
+* [Free-form address geocoding]: Specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
+* [Fuzzy search]: This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
+* [POI search]: Search for points of interests by name. For example, `"starbucks"`.
+* [POI category search]: Search for points of interests by category. For example, "restaurant".
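For instance, a fuzzy search call in predictive mode might look like this sketch; the key is a placeholder and the partial query and location bias are arbitrary.

```bash
# Placeholder subscription key; typeahead=true treats the query as partial input for autosuggest.
curl "https://atlas.microsoft.com/search/fuzzy/json?api-version=1.0&subscription-key=<your-subscription-key>&typeahead=true&query=starbu&lat=47.64&lon=-122.13"
```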
## Calculate routes and directions
-Azure Maps can be used to calculate routes and directions. Azure Maps has many of the same functionalities as the Bing Maps routing service, such as;
+Azure Maps can be used to calculate routes and directions. Azure Maps has many of the same functionalities as the Bing Maps routing service, such as:
* arrival and departure times
* real-time and predictive based traffic routes
-* different modes of transportation; driving, walking, truck
+* different modes of transportation: driving, walking, truck
* waypoint order optimization (traveling salesman)

> [!NOTE]
> Azure Maps requires all waypoints to be coordinates. Addresses will need to be geocoded first.
-The Azure Maps routing service provides the following APIs for calculating routes;
+The Azure Maps routing service provides the following APIs for calculating routes:
-* [Calculate route](/rest/api/maps/route/getroutedirections): Calculate a route and have the request processed immediately. This API supports both GET and POST requests. POST requests are recommended when specifying a large number of waypoints or when using lots of the route options to ensure that the URL request doesnΓÇÖt become too long and cause issues.
-* [Batch route](/rest/api/maps/route/postroutedirectionsbatchpreview): Create a request containing up to 1,000 route request and have them processed over a period of time. All the data will be processed in parallel on the server and when completed the full result set can be downloaded.
+* [Calculate route]: Calculate a route and have the request processed immediately. This API supports both `GET` and `POST` requests. `POST` requests are recommended when specifying a large number of waypoints or when using lots of the route options to ensure that the URL request doesn't become too long and cause issues.
+* [Batch route]: Create a request containing up to 1,000 route request and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
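A basic calculate-route request is sketched below; the key and coordinates are placeholders, and `instructionsType=text` is included to show how turn-by-turn instructions are requested.

```bash
# Placeholder subscription key; waypoints are latitude,longitude pairs separated by colons.
curl "https://atlas.microsoft.com/route/directions/json?api-version=1.0&subscription-key=<your-subscription-key>&query=47.64005,-122.1298:47.61010,-122.34255&travelMode=car&instructionsType=text"
```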
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.
-| Bing Maps API parameter | Comparable Azure Maps API parameter |
-|||
-| `avoid` | `avoid` |
-| `dateTime` (`dt`) | `departAt` or `arriveAt` |
-| `distanceBeforeFirstTurn` (`dbft`) | N/A |
-| `distanceUnit` (`du`) | N/A ΓÇô Azure Maps only uses the metric system. |
-| `heading` (`hd`) | `vehicleHeading` |
-| `maxSolutions` (`maxSolns`) | `maxAlternatives`, `alternativeType`, `minDeviationDistance`, and `minDeviationTime` |
-| `optimize` (`optwz`) | `routeType` and `traffic` |
-| `optimizeWaypoints` (`optWp`) | `computeBestOrder` |
-| `routeAttributes` (`ra`) | `instructionsType` |
-| `routePathOutput` (`rpo`) | `routeRepresentation` |
-| `timeType` (`tt`) | `departAt` or `arriveAt` |
-| `tolerances` (`tl`) | N/A |
-| `travelMode` | `travelMode` |
-| `waypoint.n` (`wp.n`) or `viaWaypoint.n` (`vwp.n`) | `query` – coordinates in the format `lat0,lon0:lat1,lon1….` |
-| `key` | `subscription-key` ΓÇô See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` ΓÇô See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` ΓÇô See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| Bing Maps API parameter | Comparable Azure Maps API parameter |
+|-||
+| `avoid` | `avoid` |
+| `dateTime` (`dt`) | `departAt` or `arriveAt` |
+| `distanceBeforeFirstTurn` (`dbft`) | N/A |
+| `distanceUnit` (`du`) | N/A – Azure Maps only uses the metric system. |
+| `heading` (`hd`) | `vehicleHeading` |
+| `maxSolutions` (`maxSolns`) | `maxAlternatives`, `alternativeType`, `minDeviationDistance`, and `minDeviationTime` |
+| `optimize` (`optwz`) | `routeType` and `traffic` |
+| `optimizeWaypoints` (`optWp`) | `computeBestOrder` |
+| `routeAttributes` (`ra`) | `instructionsType` |
+| `routePathOutput` (`rpo`) | `routeRepresentation` |
+| `timeType` (`tt`) | `departAt` or `arriveAt` |
+| `tolerances` (`tl`) | N/A |
+| `travelMode` | `travelMode` |
+| `waypoint.n` (`wp.n`) or `viaWaypoint.n` (`vwp.n`) | `query` – coordinates in the format `lat0,lon0:lat1,lon1….` |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` – For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` – For more information, see [Azure Maps supported views]. |
The Azure Maps routing API also supports truck routing within the same API. The following table cross-references the additional Bing Maps truck routing parameters with the comparable API parameters in Azure Maps.
The Azure Maps routing API also supports truck routing within the same API. The
> [!TIP]
> By default, the Azure Maps route API only returns a summary (distance and times) and the coordinates for the route path. Use the `instructionsType` parameter to retrieve turn-by-turn instructions. The `routeRepresentation` parameter can be used to filter out the summary and route path.
-Be sure to also review the [Best practices for routing](./how-to-use-best-practices-for-routing.md) documentation.
+For more information on the Azure Maps route API, see [Best practices for Azure Maps Route service].
-The Azure Maps routing API has many additional features not available in Bing Maps that might be useful to integrate when migrating your app:
+The Azure Maps routing API has features not available in Bing Maps that might be useful to integrate when migrating your app:
* Support for route type: shortest, fastest, thrilling, and most fuel efficient.
-* Support for additional travel modes: bicycle, bus, motorcycle, taxi, truck, and van.
+* Support for more travel modes: bicycle, bus, motorcycle, taxi, truck, and van.
* Support for 150 waypoints.
-* Compute multiple travel times in a single request; historic traffic, live traffic, no traffic.
+* Compute multiple travel times in a single request: historic traffic, live traffic, no traffic.
* Avoid additional road types: carpool roads, unpaved roads, already used roads.
* Engine specification-based routing. Calculate routes for combustion or electric vehicles based on their remaining fuel/charge and engine specifications.
* Specify maximum vehicle speed.
There are several ways to snap coordinates to roads in Azure Maps.
**Using the route direction API to snap coordinates**
-Azure Maps can snap coordinates to roads by using the [route directions](/rest/api/maps/route/postroutedirections) API. This service can be used to reconstruct a logical route between a set of coordinates and is comparable to the Bing Maps Snap to Road API.
+Azure Maps can snap coordinates to roads by using the [route directions] API. This service can be used to reconstruct a logical route between a set of coordinates and is comparable to the Bing Maps Snap to Road API.
There are two different ways to use the route directions API to snap coordinates to roads.
-* If there are 150 coordinates or less, they can be passed as waypoints in the GET route directions API. Using this approach two different types of snapped data can be retrieved; route instructions will contain the individual snapped waypoints, while the route path will have an interpolated set of coordinates that fill the full path between the coordinates.
-* If there are more than 150 coordinates, the POST route directions API can be used. The coordinates start and end coordinates have to be passed into the query parameter, but all coordinates can be passed into the `supportingPoints` parameter in the body of the POST request and formatted a GeoJSON geometry collection of points. The only snapped data available using this approach will be the route path that is an interpolated set of coordinates that fill the full path between the coordinates. [Here is an example](https://samples.azuremaps.com/?sample=snap-points-to-logical-route-path) of this approach using the services module in the Azure Maps Web SDK.
+* If there are 150 coordinates or fewer, they can be passed as waypoints in the `GET` route directions API. Using this approach, two different types of snapped data can be retrieved: route instructions contain the individual snapped waypoints, while the route path has an interpolated set of coordinates that fill the full path between the coordinates.
+* If there are more than 150 coordinates, the `POST` route directions API can be used. The start and end coordinates have to be passed into the query parameter, but all coordinates can be passed into the `supportingPoints` parameter in the body of the `POST` request, formatted as a GeoJSON geometry collection of points. The only snapped data available using this approach is the route path, which is an interpolated set of coordinates that fill the full path between the coordinates. To see an example of this approach using the services module in the Azure Maps Web SDK, see the [Snap points to logical route path] sample in the Azure Maps samples.
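A rough sketch of the `POST` approach follows; the key and coordinates are placeholders, and the body shows the `supportingPoints` GeoJSON geometry collection shape described above.

```bash
# Placeholder subscription key; the query holds the start and end, supportingPoints holds every point to snap.
curl -X POST "https://atlas.microsoft.com/route/directions/json?api-version=1.0&subscription-key=<your-subscription-key>&query=47.6200,-122.3500:47.6350,-122.3000" \
  -H "Content-Type: application/json" \
  -d '{
    "supportingPoints": {
      "type": "GeometryCollection",
      "geometries": [
        { "type": "Point", "coordinates": [-122.3500, 47.6200] },
        { "type": "Point", "coordinates": [-122.3270, 47.6280] },
        { "type": "Point", "coordinates": [-122.3000, 47.6350] }
      ]
    }
  }'
```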
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.

| Bing Maps API parameter | Comparable Azure Maps API parameter |
|--|--|
-| `points` | `supportingPoints` ΓÇô pass these points into the body of the post request |
+| `points` | `supportingPoints` – pass these points into the body of the `POST` request |
| `interpolate` | N/A |
| `includeSpeedLimit` | N/A |
| `includeTruckSpeedLimit` | N/A |
| `speedUnit` | N/A |
| `travelMode` | `travelMode` |
-| `key` | `subscription-key` ΓÇô See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` ΓÇô See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` ΓÇô See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` – For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` – For more information, see [Azure Maps supported views]. |
The Azure Maps routing API also supports truck routing parameters within the same API to ensure logical paths are calculated. The following table cross-references the additional Bing Maps truck routing parameters with the comparable API parameters in Azure Maps.
The Azure Maps routing API also supports truck routing parameter within the same
Since this approach uses the route directions API, the full set of options in that service can be used to customize the logic used to snap the coordinate to roads. For example, specifying a departure time would result in historic traffic data being taken into consideration.
-The Azure Maps route directions API does not currently return speed limit data, however that can be retrieved using the Azure Maps reverse geocoding API.
+The Azure Maps route directions API doesn't currently return speed limit data; however, that data can be retrieved using the Azure Maps reverse geocoding API.
**Using the Web SDK to snap coordinates**
-The Azure Maps Web SDK uses vector tiles to render the maps. These vector tiles contain the raw road geometry information and can be used to calculate the nearest road to a coordinate for simple snapping of individual coordinates. This is useful when you want the coordinates to visually appear over roads and you are already using the Azure Maps Web SDK to visualize the data.
+The Azure Maps Web SDK uses vector tiles to render the maps. These vector tiles contain the raw road geometry information and can be used to calculate the nearest road to a coordinate for simple snapping of individual coordinates. This is useful when you want the coordinates to visually appear over roads and you're already using the Azure Maps Web SDK to visualize the data.
-This approach however will only snap to the road segments that are loaded within the map view. When zoomed out at country level there may be no road data, so snapping canΓÇÖt be done, however at that zoom level a single pixel can represent the area of several city blocks so snapping isnΓÇÖt needed. To address this, the snapping logic can be applied every time the map has finished moving. [Here is a code sample](https://samples.azuremaps.com/?sample=basic-snap-to-road-logic) that demonstrates this.
+This approach, however, only snaps to the road segments that are loaded within the map view. When zoomed out at country level there may be no road data, so snapping can't be done; however, at that zoom level a single pixel can represent the area of several city blocks, so snapping isn't needed. To address this, the snapping logic can be applied every time the map has finished moving. To see a fully functional example of this snapping logic, see the [Basic snap to road logic] sample in the Azure Maps samples.
**Using the Azure Maps vector tiles directly to snap coordinates**
-The Azure Maps vector tiles contain the raw road geometry data that can be used to calculate the nearest point on a road to a coordinate to do basic snapping of individual coordinates. All road segments appear in the sectors at zoom level 15, so you will want to retrieve tiles from there. You can then use the [quadtree tile pyramid math](./zoom-levels-and-tile-grid.md) to determine that tiles are needed and convert the tiles to geometries. From there a spatial math library, such as [turf js](https://turfjs.org/) or [NetTopologySuite](https://github.com/NetTopologySuite/NetTopologySuite) can be used to calculate the closest line segments.
+The Azure Maps vector tiles contain the raw road geometry data that can be used to calculate the nearest point on a road to a coordinate to do basic snapping of individual coordinates. All road segments appear in the tiles at zoom level 15, so you want to retrieve tiles from there. You can then use the [quadtree tile pyramid math] to determine which tiles are needed and convert the tiles to geometries. From there, a spatial math library, such as [turf js] or [NetTopologySuite], can be used to calculate the closest line segments.
## Retrieve a map image (Static Map)
-Azure Maps provides an API for rendering the static map images with data overlaid. The Azure Maps [Map image render](/rest/api/maps/render/getmapimagerytile) API is comparable to the static map API in Bing Maps.
+Azure Maps provides an API for rendering the static map images with data overlaid. The Azure Maps [Map image render] API is comparable to the static map API in Bing Maps.
> [!NOTE]
-> Azure Maps requires the center, all pushpins and path locations to be coordinates in `longitude,latitude` format whereas Bing Maps uses the `latitude,longitude` format.</p>
-<p>Addresses will need to be geocoded first.
+> Azure Maps requires the center, all pushpins and path locations to be coordinates in `longitude,latitude` format whereas Bing Maps uses the `latitude,longitude` format. Addresses will need to be geocoded first.
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.
The following table cross-references the Bing Maps API parameters with the compa
| `centerPoint` | `center` |
| `format` | `format` – specified as part of URL path. Currently only PNG supported. |
| `heading` | N/A – Streetside not supported. |
-| `imagerySet` | `layer` and `style` ΓÇô See [Supported map styles](./supported-map-styles.md) documentation. |
| `imagerySet` | `layer` and `style` – For more information, see [Supported map styles].|
| `mapArea` (`ma`) | `bbox` |
| `mapLayer` (`ml`) | N/A |
| `mapSize` (`ms`) | `width` and `height` – can be up to 8192x8192 in size. |
The following table cross-references the Bing Maps API parameters with the compa
| `highlightEntity` (`he`) | N/A |
| `style` | N/A |
| route parameters | N/A |
-| `key` | `subscription-key` ΓÇô See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` ΓÇô See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` ΓÇô See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` – For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` – For more information, see [Azure Maps supported views]. |
> [!NOTE]
> Azure Maps uses a tile system with tiles that are twice the size of the map tiles used in Bing Maps. As such, the zoom level value in Azure Maps will appear one zoom level closer in Azure Maps compared to Bing Maps. Lower the zoom level in the requests you are migrating by 1 to compensate for this.
-See the [How-to guide on the map image render API](./how-to-render-custom-data.md) for more information.
+For more information, see [Render custom data on a raster map].
-In addition to being able to generate a static map image, the Azure Maps render service also provides the ability to directly access map tiles in raster (PNG) and vector format;
+In addition to being able to generate a static map image, the Azure Maps render service also enables direct access to map tiles in raster (PNG) and vector format:
-* [Map tile](/rest/api/maps/render/getmaptile) ΓÇô Retrieve raster (PNG) and vector tiles for the base maps (roads, boundaries, background).
-* [Map imagery tile](/rest/api/maps/render/getmapimagerytile) ΓÇô Retrieve aerial and satellite imagery tiles.
+* [Map tiles] – Retrieve raster (PNG) and vector tiles for the base maps (roads, boundaries, background).
+* [Map imagery tile] – Retrieve aerial and satellite imagery tiles.
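A single tile can be fetched with a request along these lines; the key is a placeholder, and the zoom/x/y values are arbitrary and follow the standard quadtree tile grid.

```bash
# Placeholder subscription key; downloads one raster base-map tile as a PNG.
curl "https://atlas.microsoft.com/map/tile/png?api-version=1.0&layer=basic&style=main&zoom=10&x=163&y=355&subscription-key=<your-subscription-key>" \
  --output tile.png
```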
### Pushpin URL parameter format comparison
In Bing Maps, pushpins can be added to a static map image by using the `pushpin`
> `&pushpin=latitude,longitude;iconStyle;label`
-Additional pushpins can be added by adding additional `pushpin` parameters to the URL with a different set of values. Pushpin icon styles are limited to one of the predefined styles available in the Bing Maps API.
+Pushpins can be added by adding more `pushpin` parameters to the URL with a different set of values. Pushpin icon styles are limited to one of the predefined styles available in the Bing Maps API.
For example, in Bing Maps, a red pushpin with the label "AB" can be added to the map at coordinates (longitude: -110, latitude: 45) with the following URL parameter:
In Azure Maps, pushpins can also be added to a static map image by specifying th
> `&pins=iconType|pinStyles||pinLocation1|pinLocation2|...`
-Additional styles can be used by adding additional `pins` parameters to the URL with a different style and set of locations.
+Additional styles can be used by adding more `pins` parameters to the URL with a different style and set of locations.
-When it comes to pin locations, Azure Maps requires the coordinates to be in `longitude latitude` format whereas Bing Maps uses `latitude,longitude` format. Also note that **there is a space, not a comma** separating longitude and latitude in Azure Maps.
+Regarding pin locations, Azure Maps requires the coordinates to be in `longitude latitude` format whereas Bing Maps uses `latitude,longitude` format. Also note that **there is a space, not a comma** separating longitude and latitude in Azure Maps.
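As a minimal sketch of the `longitude latitude` ordering described above, the following request adds two default pushpins to a static map image. The coordinates and the `<Your-Azure-Maps-Key>` placeholder are illustrative assumptions.

```bash
# Hedged sketch: two default pushpins; each location is "longitude latitude",
# space separated, with locations separated by the pipe (|) character.
curl -G "https://atlas.microsoft.com/map/static/png" \
  --data-urlencode "api-version=1.0" \
  --data-urlencode "center=-122.19,47.61" \
  --data-urlencode "zoom=13" \
  --data-urlencode "pins=default||-122.187 47.610|-122.201 47.618" \
  --data-urlencode "subscription-key=<Your-Azure-Maps-Key>" \
  --output pins.png
```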
The `iconType` value specifies the type of pin to create and can have the following values: * `default` ΓÇô The default pin icon.
-* `none` ΓÇô No icon is displayed, only labels will be rendered.
+* `none` ΓÇô No icon is displayed, only labels are rendered.
* `custom` ΓÇô Specifies a custom icon is to be used. A URL pointing to the icon image can be added to the end of the `pins` parameter after the pin location information. * `{udid}` ΓÇô A Unique Data ID (UDID) for an icon stored in the Azure Maps Data Storage platform.
-Pin styles in Azure Maps are added with the format `optionNameValue`, with multiple styles separated by pipe (`|`) characters like this `iconType|optionName1Value1|optionName2Value2`. Note the option names and values are not separated. The following style option names can be used to style pushpins in Azure Maps:
+Pin styles in Azure Maps are added with the format `optionNameValue`, with multiple styles separated by pipe (`|`) characters like this `iconType|optionName1Value1|optionName2Value2`. Note the option names and values aren't separated. The following style option names can be used to style pushpins in Azure Maps:
* `al` ΓÇô Specifies the opacity (alpha) of the pushpins. Can be a number between 0 and 1. * `an` ΓÇô Specifies the pin anchor. X and y pixel values specified in the format `x y`.
In Bing Maps, lines, and polygons can be added to a static map image by using th
> `&drawCurve=shapeType,styleType,location1,location2...`
-Additional styles can be used by adding additional `drawCurve` parameters to the URL with a different style and set of locations.
+More styles can be used by adding additional `drawCurve` parameters to the URL with a different style and set of locations.
Locations in Bing Maps are specified with the format `latitude1,longitude1_latitude2,longitude2_…`. Locations can also be encoded.
In Azure Maps, lines and polygons can also be added to a static map image by spe
> `&path=pathStyles||pathLocation1|pathLocation2|...`
-When it comes to path locations, Azure Maps requires the coordinates to be in `longitude latitude` format whereas Bing Maps uses `latitude,longitude` format. Also note that **there is a space, not a comma separating** longitude and latitude in Azure Maps. Azure Maps does not support encoded paths currently. Larger data sets can be uploaded as a GeoJSON fills into the Azure Maps Data Storage API as documented [here](./how-to-render-custom-data.md#upload-pins-and-path-data).
+When it comes to path locations, Azure Maps requires the coordinates to be in `longitude latitude` format whereas Bing Maps uses `latitude,longitude` format. Also note that **there is a space, not a comma separating** longitude and latitude in Azure Maps. Azure Maps doesn't support encoded paths currently. Larger data sets can be uploaded as GeoJSON files into the Azure Maps Data Storage API. For more information, see [Upload pins and path data](./how-to-render-custom-data.md#upload-pins-and-path-data).
-Path styles in Azure Maps are added with the format `optionNameValue`, with multiple styles separated by pipe (`|`) characters like this `optionName1Value1|optionName2Value2`. Note the option names and values are not separated. The following style option names can be used to style paths in Azure Maps:
+Path styles in Azure Maps are added with the format `optionNameValue`, with multiple styles separated by pipe (`|`) characters like this `optionName1Value1|optionName2Value2`. Note the option names and values aren't separated. The following style option names can be used to style paths in Azure Maps:
* `fa` ΓÇô The fill color opacity (alpha) used when rendering polygons. Can be a number between 0 and 1. * `fc` ΓÇô The fill color used to render the area of a polygon.
For example, in Azure Maps, a blue line with 50% opacity and a thickness of four
## Calculate a distance matrix
-Azure Maps provides an API for calculating the travel times and distances between a set of locations as a distance matrix. The Azure Maps distance matrix API is comparable to the distance matrix API in Bing Maps;
+Azure Maps provides an API for calculating the travel times and distances between a set of locations as a distance matrix. The Azure Maps distance matrix API is comparable to the distance matrix API in Bing Maps:
-* [Route matrix](/rest/api/maps/route/postroutematrixpreview): Asynchronously calculates travel times and distances for a set of origins and destinations. Up to 700 cells per request is supported (the number of origins multiplied by the number of destinations). With that constraint in mind, examples of possible matrix dimensions are: `700x1`, `50x10`, `10x10`, `28x25`, `10x70`.
+* [Route matrix]: Asynchronously calculates travel times and distances for a set of origins and destinations. Up to 700 cells per request are supported (the number of origins multiplied by the number of destinations). With that constraint in mind, examples of possible matrix dimensions are: `700x1`, `50x10`, `10x10`, `28x25`, `10x70`.
> [!NOTE]
-> A request to the distance matrix API can only be made using a POST request with the origin and destination information in the body of the request.</p>
-<p>Additionally, Azure Maps requires all origins and destinations to be coordinates. Addresses will need to be geocoded first.
+> A request to the distance matrix API can only be made using a `POST` request with the origin and destination information in the body of the request. Additionally, Azure Maps requires all origins and destinations to be coordinates. Addresses will need to be geocoded first.
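The following is a minimal sketch of such a request against the synchronous [Route matrix] endpoint, with one origin and two destinations expressed as GeoJSON `MultiPoint` geometries (coordinates in longitude, latitude order). The coordinates and the `<Your-Azure-Maps-Key>` placeholder are illustrative assumptions.

```bash
# Hedged sketch: 1 origin x 2 destinations = 2 matrix cells, sent in the POST body.
curl -X POST "https://atlas.microsoft.com/route/matrix/sync/json?api-version=1.0&subscription-key=<Your-Azure-Maps-Key>" \
  -H "Content-Type: application/json" \
  -d '{
    "origins":      { "type": "MultiPoint", "coordinates": [[-122.33, 47.60]] },
    "destinations": { "type": "MultiPoint", "coordinates": [[-122.12, 47.67], [-122.34, 47.61]] }
  }'
```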
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps. | Bing Maps API parameter | Comparable Azure Maps API parameter | |-|-|
-| `origins` | `origins` ΓÇô specify in the POST request body as GeoJSON. |
-| `destinations` | `destination` ΓÇô specify in the POST request body as GeoJSON. |
+| `origins` | `origins` ΓÇô specify in the `POST` request body as GeoJSON. |
+| `destinations` | `destination` ΓÇô specify in the `POST` request body as GeoJSON.|
| `endTime` | `arriveAt` | | `startTime` | `departAt` | | `travelMode` | `travelMode` | | `resolution` | N/A | | `distanceUnit` | N/A ΓÇô All distances in meters. | | `timeUnit` | N/A ΓÇô All times in seconds. |
-| `key` | `subscription-key` ΓÇô See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` ΓÇô See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` ΓÇô See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| `key` | `subscription-key` ΓÇô For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` ΓÇô For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` ΓÇô For more information, see [Azure Maps supported views]. |
> [!TIP] > All the advanced routing options available in the Azure Maps routing API (truck routing, engine specifications, avoid…) are supported in the Azure Maps distance matrix API. ## Calculate an isochrone
-Azure Maps provides an API for calculating an isochrone, a polygon covering an area that can be traveled to in any direction from an origin point within a specified amount of time or amount of fuel/charge. The Azure Maps route range API is comparable to the isochrone API in Bing Maps;
+Azure Maps provides an API for calculating an isochrone, a polygon covering an area that can be traveled to in any direction from an origin point within a specified amount of time or amount of fuel/charge. The Azure Maps route range API is comparable to the isochrone API in Bing Maps.
-* [Route](/rest/api/maps/route/getrouterange) Range**: Calculate a polygon covering an area that can be traveled to in any direction from an origin point within a specified amount of time, distance, or amount of fuel/charge available.
+* [Route Range]: Calculate a polygon covering an area that can be traveled to in any direction from an origin point within a specified amount of time, distance, or amount of fuel/charge available.
> [!NOTE] > Azure Maps requires the query origin to be a coordinate. Addresses will need to be geocoded first.
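As a minimal sketch, the following [Route Range] request asks for the area reachable within 30 minutes of driving from a single origin coordinate (`latitude,longitude` in the `query` parameter). The coordinate, time budget, and `<Your-Azure-Maps-Key>` placeholder are illustrative assumptions.

```bash
# Hedged sketch: isochrone for an 1800-second (30-minute) travel time budget.
curl -G "https://atlas.microsoft.com/route/range/json" \
  --data-urlencode "api-version=1.0" \
  --data-urlencode "query=47.606,-122.333" \
  --data-urlencode "timeBudgetInSec=1800" \
  --data-urlencode "subscription-key=<Your-Azure-Maps-Key>"
```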
The following table cross-references the Bing Maps API parameters with the compa
| `maxDistance` (`maxDis`) | `distanceBudgetInMeters` | | `distanceUnit` (`du`) | N/A ΓÇô All distances in meters. | | `optimize` (`optmz`) | `routeType` |
-| `key` | `subscription-key` ΓÇô See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` ΓÇô See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` ΓÇô See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| `key` | `subscription-key` ΓÇô For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` ΓÇô For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` ΓÇô For more information, see [Azure Maps supported views]. |
> [!TIP] > All the advanced routing options available in the Azure Maps routing API (truck routing, engine specifications, avoid…) are supported in the Azure Maps isochrone API.
The following table cross-references the Bing Maps API parameters with the compa
Point of interest data can be searched in Bing Maps by using the following APIs:
-* **Local search:** Searches for points of interest that are nearby (radial search), by name, or by entity type (category). The Azure Maps [POI search](/rest/api/maps/search/getsearchpoi) and [POI category search](/rest/api/maps/search/getsearchpoicategory) APIs are most like this API.
-* **Location recognition**: Searches for points of interests that are within a certain distance of a location. The Azure Maps [nearby search](/rest/api/maps/search/getsearchnearby) API is most like this API.
-* **Local insights:** Searches for points of interests that are within a specified maximum driving time or distance from a specific coordinate. This is achievable with Azure Maps by first calculating an isochrone and then passing it into the [search within geometry](/rest/api/maps/search/postsearchinsidegeometry) API.
+* **Local search**: Searches for points of interest that are nearby (radial search), by name, or by entity type (category). The Azure Maps [POI search] and [POI category search] APIs are most like this API.
+* **Location recognition**: Searches for points of interest that are within a certain distance of a location. The Azure Maps [nearby search] API is most like this API.
+* **Local insights**: Searches for points of interest that are within a specified maximum driving time or distance from a specific coordinate. This is achievable with Azure Maps by first calculating an isochrone and then passing it into the [Search within geometry] API.
Azure Maps provides several search APIs for points of interest:
-* [POI search](/rest/api/maps/search/getsearchpoi): Search for points of interests by name. For example; `"starbucks"`.
-* [POI category search](/rest/api/maps/search/getsearchpoicategory): Search for points of interests by category. For example; "restaurant".
-* [Nearby search](/rest/api/maps/search/getsearchnearby): Searches for points of interests that are within a certain distance of a location.
-* [Fuzzy search](/rest/api/maps/search/getsearchfuzzy): This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
-* [Search within geometry](/rest/api/maps/search/postsearchinsidegeometry): Search for points of interests that are within a specified geometry (polygon).
-* [Search along route](/rest/api/maps/search/postsearchalongroute): Search for points of interests that are along a specified route path.
-* [Fuzzy batch search](/rest/api/maps/search/postsearchfuzzybatchpreview): Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data will be processed in parallel on the server and when completed the full result set can be downloaded.
+* [POI search]: Search for points of interest by name. For example, `"starbucks"`. A sample request is shown after this list.
+* [POI category search]: Search for points of interest by category. For example, "restaurant".
+* [Nearby search]: Searches for points of interest that are within a certain distance of a location.
+* [Fuzzy search]: This API combines address geocoding with point of interest search. It takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and processes the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
+* [Search within geometry]: Search for points of interest that are within a specified geometry (polygon).
+* [Search along route]: Search for points of interest that are along a specified route path.
+* [Fuzzy batch search]: Create a request containing up to 10,000 addresses, places, landmarks, or points of interest and have them processed over a period of time. All the data is processed in parallel on the server and, when completed, the full result set can be downloaded.
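The following is a minimal sketch of a [POI search] request that looks for "starbucks" near a sample coordinate. The coordinate, result limit, and `<Your-Azure-Maps-Key>` placeholder are illustrative assumptions.

```bash
# Hedged sketch: POI search by name, biased to a location, returning up to 5 results.
curl -G "https://atlas.microsoft.com/search/poi/json" \
  --data-urlencode "api-version=1.0" \
  --data-urlencode "query=starbucks" \
  --data-urlencode "lat=47.6062" \
  --data-urlencode "lon=-122.3321" \
  --data-urlencode "limit=5" \
  --data-urlencode "subscription-key=<Your-Azure-Maps-Key>"
```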
-Be sure to review the [best practices for search](./how-to-use-best-practices-for-search.md) documentation.
+For more information on searching in Azure Maps, see [Best practices for Azure Maps Search service].
## Get traffic incidents
-Azure Maps provides several APIs for retrieving traffic data. There are two types of traffic data available;
+Azure Maps provides several APIs for retrieving traffic data. There are two types of traffic data available:
* **Flow data** – provides metrics on the flow of traffic on sections of roads. This is often used to color code roads. This data is updated every 2 minutes. * **Incident data** – provides data on construction, road closures, accidents, and other incidents that may affect traffic. This data is updated every minute. Bing Maps provides traffic flow and incident data in its interactive map controls, and also makes incident data available as a service.
-Traffic data is also integrated into the Azure Maps interactive map controls. Azure maps also provides the following traffic services APIs;
+Traffic data is also integrated into the Azure Maps interactive map controls. Azure Maps also provides the following traffic services APIs:
-* [Traffic flow segments](/rest/api/maps/traffic/gettrafficflowsegment): Provides information about the speeds and travel times of the road fragment closest to the given coordinates.
-* [Traffic flow tiles](/rest/api/maps/traffic/gettrafficflowtile): Provides raster and vector tiles containing traffic flow data. These
+* [Traffic flow segments]: Provides information about the speeds and travel times of the road fragment closest to the given coordinates. A sample request is shown after this list.
+* [Traffic flow tiles]: Provides raster and vector tiles containing traffic flow data. These
can be used with the Azure Maps controls or in third-party map controls such as Leaflet. The vector tiles can also be used for advanced data analysis.
-* [Traffic incident details](/rest/api/maps/traffic/gettrafficincidentdetail): Provides traffic incident details that are within a bounding box, zoom level, and traffic model.
-* [Traffic incident tiles](/rest/api/maps/traffic/gettrafficincidenttile): Provides raster and vector tiles containing traffic incident data.
-* [Traffic incident viewport](/rest/api/maps/traffic/gettrafficincidentviewport): Retrieves the legal and technical information for the viewport described in the request, such as the traffic model ID.
+* [Traffic incident details]: Provides traffic incident details that are within a bounding box, zoom level, and traffic model.
+* [Traffic incident tiles]: Provides raster and vector tiles containing traffic incident data.
+* [Traffic incident viewport]: Retrieves the legal and technical information for the viewport described in the request, such as the traffic model ID.
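As a minimal sketch of the [Traffic flow segments] API listed above, the following request asks for flow data for the road fragment closest to a sample coordinate (`latitude,longitude` in the `query` parameter). The style, zoom level, coordinate, and `<Your-Azure-Maps-Key>` placeholder are illustrative assumptions.

```bash
# Hedged sketch: absolute speed values for the closest road segment at zoom level 10.
curl -G "https://atlas.microsoft.com/traffic/flow/segment/json" \
  --data-urlencode "api-version=1.0" \
  --data-urlencode "style=absolute" \
  --data-urlencode "zoom=10" \
  --data-urlencode "query=47.6062,-122.3321" \
  --data-urlencode "subscription-key=<Your-Azure-Maps-Key>"
```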
The following table cross-references the Bing Maps traffic API parameters with the comparable traffic incident details API parameters in Azure Maps.
The following table cross-references the Bing Maps traffic API parameters with t
| `includeLocationCodes` | N/A | | `severity` (`s`) | N/A ΓÇô all data returned | | `type` (`t`) | N/A ΓÇô all data returned |
-| `key` | `subscription-key` ΓÇô See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` ΓÇô See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` ΓÇô See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| `key` | `subscription-key` ΓÇô For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` ΓÇô For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` ΓÇô For more information, see [Azure Maps supported views]. |
## Get a time zone
-Azure Maps provides an API for retrieving the time zone a coordinate is in. The Azure Maps time zone API is comparable to the time zone API in Bing Maps;
+Azure Maps provides an API for retrieving the time zone a coordinate is in. The Azure Maps time zone API is comparable to the time zone API in Bing Maps.
-* [Time zone by coordinate](/rest/api/maps/timezone/gettimezonebycoordinates): Specify a coordinate and get the details for the time zone it falls in.
+* [Time zone by coordinate]: Specify a coordinate and get the details for the time zone it falls in.
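As a minimal sketch, the following [Time zone by coordinate] request returns time zone details for a sample coordinate (`latitude,longitude` in the `query` parameter). The coordinate and the `<Your-Azure-Maps-Key>` placeholder are illustrative assumptions.

```bash
# Hedged sketch: time zone lookup for a coordinate, returning all available details.
curl -G "https://atlas.microsoft.com/timezone/byCoordinates/json" \
  --data-urlencode "api-version=1.0" \
  --data-urlencode "options=all" \
  --data-urlencode "query=47.6062,-122.3321" \
  --data-urlencode "subscription-key=<Your-Azure-Maps-Key>"
```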
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.
The following table cross-references the Bing Maps API parameters with the compa
| `query` | N/A - locations must be geocoded first. | | `dateTime` | `timeStamp` | | `includeDstRules` | N/A ΓÇô Always included in response by Azure Maps. |
-| `key` | `subscription-key` ΓÇô See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` ΓÇô See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` ΓÇô See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| `key` | `subscription-key` ΓÇô For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` ΓÇô For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` ΓÇô For more information, see [Azure Maps supported views]. |
-In addition to this the Azure Maps platform also provides a number of additional time zone APIs to help with conversions with time zone names and IDs;
+In addition, the Azure Maps platform provides several other time zone APIs to help with conversions between time zone names and IDs:
-* [Time zone by ID](/rest/api/maps/timezone/gettimezonebyid): Returns current, historical, and future time zone information for the specified IANA time zone ID.
-* [Time zone Enum IANA](/rest/api/maps/timezone/gettimezoneenumiana): Returns a full list of IANA time zone IDs. Updates to the IANA service are reflected in the system within one day.
-* [Time zone Enum Windows](/rest/api/maps/timezone/gettimezoneenumwindows): Returns a full list of Windows Time Zone IDs.
-* [Time zone IANA version](/rest/api/maps/timezone/gettimezoneianaversion): Returns the current IANA version number used by Azure Maps.
-* [Time zone Windows to IANA](/rest/api/maps/timezone/gettimezonewindowstoiana): Returns a corresponding IANA ID, given a valid Windows Time Zone ID. Multiple IANA IDs may be returned for a single Windows ID.
+* [Time zone by ID]: Returns current, historical, and future time zone information for the specified IANA time zone ID.
+* [Time zone Enum IANA]: Returns a full list of IANA time zone IDs. Updates to the IANA service are reflected in the system within one day.
+* [Time zone Enum Windows]: Returns a full list of Windows Time Zone IDs.
+* [Time zone IANA version]: Returns the current IANA version number used by Azure Maps.
+* [Time zone Windows to IANA]: Returns a corresponding IANA ID, given a valid Windows Time Zone ID. Multiple IANA IDs may be returned for a single Windows ID.
## Spatial Data Services (SDS)
Batch geocoding is the process of taking a large number of addresses or places,
Bing Maps allows up to 200,000 addresses to be passed in a single batch geocode request. This request goes into a queue and usually processes over a period of time, anywhere from a few minutes to a few hours depending on the size of the data set and the load on the service. Each address in the request generates a transaction.
-Azure Maps has a batch geocoding service, however it allows up to 10,000 addresses to be passed in a single request and is processed over seconds to a few minutes depending on the size of the data set and the load on the service. Each address in the request generated a transaction. In Azure Maps, the batch geocoding service is only available the Gen 2 or S1 pricing tier. For more information on pricing tiers, see [Choose the right pricing tier in Azure Maps](choose-pricing-tier.md).
+Azure Maps has a batch geocoding service; however, it allows up to 10,000 addresses to be passed in a single request, which is processed over seconds to a few minutes depending on the size of the data set and the load on the service. Each address in the request generates a transaction. In Azure Maps, the batch geocoding service is only available in the Gen 2 or S1 pricing tier. For more information on pricing tiers, see [Choose the right pricing tier in Azure Maps].
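As a minimal sketch of the Azure Maps batch geocoding request format, the following synchronous [Batch address geocoding] call submits two addresses; each batch item is the query string of a free-form address geocoding request. The addresses, the use of the synchronous endpoint (suitable only for small batches), and the `<Your-Azure-Maps-Key>` placeholder are illustrative assumptions; large batches go to the asynchronous endpoint instead.

```bash
# Hedged sketch: synchronous batch geocode of two addresses.
curl -X POST "https://atlas.microsoft.com/search/address/batch/sync/json?api-version=1.0&subscription-key=<Your-Azure-Maps-Key>" \
  -H "Content-Type: application/json" \
  -d '{
    "batchItems": [
      { "query": "?query=400 Broad St, Seattle, WA 98109&limit=1" },
      { "query": "?query=One Microsoft Way, Redmond, WA 98052&limit=1" }
    ]
  }'
```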
-Another option for geocoding a large number addresses with Azure Maps is to make parallel requests to the standard search APIs. These services only accept a single address per request but can be used with the S0 tier that also provides free usage limits. The S0 tier allows up to 50 requests per second to the Azure Maps platform from a single account. So if you process limit these to stay within that limit, it is possible to geocode upwards of 180,000 address an hour. The Gen 2 or S1 pricing tier doesnΓÇÖt have a documented limit on the number of queries per second that can be made from an account, so a lot more data can be processed faster when using that pricing tier, however using the batch geocoding service will help reduce the total amount of data transferred and will drastically reduce the network traffic.
+Another option for geocoding a large number of addresses with Azure Maps is to make parallel requests to the standard search APIs. These services only accept a single address per request but can be used with the S0 tier that also provides free usage limits. The S0 tier allows up to 50 requests per second to the Azure Maps platform from a single account. So if you limit your processing to stay within that limit, it's possible to geocode upwards of 180,000 addresses an hour. The Gen 2 or S1 pricing tier doesn't have a documented limit on the number of queries per second that can be made from an account, so a lot more data can be processed faster when using that pricing tier, however using the batch geocoding service helps reduce the total amount of data transferred, reducing network traffic.
-* [Free-form address geocoding](/rest/api/maps/search/getsearchaddress): Specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
-* [Structured address geocoding](/rest/api/maps/search/getsearchaddressstructured): Specify the parts of a single address, such as the street name, city, country, and postal code and process the request immediately. This service is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
-* [Batch address geocoding](/rest/api/maps/search/postsearchaddressbatchpreview): Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses will be geocoded in parallel on the server and when completed the full result set can be downloaded. This service is recommended for geocoding large data sets.
-* [Fuzzy search](/rest/api/maps/search/getsearchfuzzy): This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
-* [Fuzzy batch search](/rest/api/maps/search/postsearchfuzzybatchpreview): Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data will be processed in parallel on the server and when completed the full result set can be downloaded.
+* [Free-form address geocoding]: Specify a single address string (like `"1 Microsoft way, Redmond, WA"`); the request is processed immediately. This service is recommended if you need to geocode individual addresses quickly. A sample request is shown after this list.
+* [Structured address geocoding]: Specify the parts of a single address, such as the street name, city, country, and postal code; the request is processed immediately. This service is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
+* [Batch address geocoding]: Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses are geocoded in parallel on the server and, when completed, the full result set can be downloaded. This service is recommended for geocoding large data sets.
+* [Fuzzy search]: This API combines address geocoding with point of interest search. It takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and processes the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
+* [Fuzzy batch search]: Create a request containing up to 10,000 addresses, places, landmarks, or points of interest and have them processed over a period of time. All the data is processed in parallel on the server and, when completed, the full result set can be downloaded.
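As a minimal sketch of the [Free-form address geocoding] API listed above, the following request geocodes a single address string. The address, result limit, and `<Your-Azure-Maps-Key>` placeholder are illustrative assumptions.

```bash
# Hedged sketch: geocode a single free-form address and return the best match.
curl -G "https://atlas.microsoft.com/search/address/json" \
  --data-urlencode "api-version=1.0" \
  --data-urlencode "query=1 Microsoft Way, Redmond, WA" \
  --data-urlencode "limit=1" \
  --data-urlencode "subscription-key=<Your-Azure-Maps-Key>"
```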
### Get administrative boundary data
-In Bing Maps, administrative boundaries for countries, states, counties, cities, and postal codes are made available via the Geodata API. This API takes in either a coordinate or query to geocode. If a query is passed in, it is geocoded and the coordinates from the first result is used. This API takes the coordinates and retrieves the boundary of the specified entity type that intersects the coordinate. Note that this API did not necessarily return the boundary for the query that was passed in. If a query for `"Seattle, WA"` is passed in, but the entity type value is set to country region, the boundary for the USA would be returned.
+In Bing Maps, administrative boundaries for countries, states, counties, cities, and postal codes are made available via the Geodata API. This API takes in either a coordinate or query to geocode. If a query is passed in, it's geocoded and the coordinates from the first result are used. This API takes the coordinates and retrieves the boundary of the specified entity type that intersects the coordinate. This API doesn't necessarily return the boundary for the query that was passed in. If a query for `"Seattle, WA"` is passed in, but the entity type value is set to country region, the boundary for the USA would be returned.
-Azure Maps also provides access to administrative boundaries (countries, states, counties, cities, and postal codes). To retrieve a boundary, you must query one of the search APIs for the boundary you want (i.e. `Seattle, WA`). If the search result has an associated boundary, a geometry ID will be provided in the result response. The search polygon API can then be used to retrieve the exact boundaries for one or more geometry IDs. This is a bit different than Bing Maps as Azure Maps returns the boundary for what was searched for, whereas Bing Maps returns a boundary for a specified entity type at a specified coordinate. Additionally, the boundary data returned by Azure Maps is in GeoJSON format.
+Azure Maps also provides access to administrative boundaries (countries, states, counties, cities, and postal codes). To retrieve a boundary, you must query one of the search APIs for the boundary you want (such as `Seattle, WA`). If the search result has an associated boundary, a geometry ID is provided in the result response. The search polygon API can then be used to retrieve the exact boundaries for one or more geometry IDs. This is a bit different than Bing Maps as Azure Maps returns the boundary for what was searched for, whereas Bing Maps returns a boundary for a specified entity type at a specified coordinate. Additionally, the boundary data returned by Azure Maps is in GeoJSON format.
To recap: 1. Pass a query for the boundary you want to receive into one of the following search APIs.
- * [Free-form address geocoding](/rest/api/maps/search/getsearchaddress)
- * [Structured address geocoding](/rest/api/maps/search/getsearchaddressstructured)
- * [Batch address geocoding](/rest/api/maps/search/postsearchaddressbatchpreview)
- * [Fuzzy search](/rest/api/maps/search/getsearchfuzzy)
- * [Fuzzy batch search](/rest/api/maps/search/postsearchfuzzybatchpreview)
+ * [Free-form address geocoding]
+ * [Structured address geocoding]
+ * [Batch address geocoding]
+ * [Fuzzy search]
+ * [Fuzzy batch search]
-1. If the desired result(s) has a geometry ID(s), pass it into the [Search Polygon API](/rest/api/maps/search/getsearchpolygon).
+1. If the desired result has a geometry ID, pass it into the [Search Polygon API], as shown in the sketch below.
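The following sketch illustrates the two-step boundary lookup described above: geocode `Seattle, WA`, then pass a geometry ID from the result to the [Search Polygon API]. The query, the presence of a geometry ID in the result, and the `<Your-Azure-Maps-Key>` and `<geometry-id>` placeholders are illustrative assumptions.

```bash
# Hedged sketch, step 1: geocode the place whose boundary you want.
curl -G "https://atlas.microsoft.com/search/address/json" \
  --data-urlencode "api-version=1.0" \
  --data-urlencode "query=Seattle, WA" \
  --data-urlencode "limit=1" \
  --data-urlencode "subscription-key=<Your-Azure-Maps-Key>"

# Hedged sketch, step 2: if the result includes a geometry ID, request its boundary
# as GeoJSON from the Search Polygon API.
curl -G "https://atlas.microsoft.com/search/polygon/json" \
  --data-urlencode "api-version=1.0" \
  --data-urlencode "geometries=<geometry-id>" \
  --data-urlencode "subscription-key=<Your-Azure-Maps-Key>"
```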
### Host and query spatial business data
-The spatial data services in Bing Maps provide a simple spatial data storage solution for hosting business data and exposing it as a spatial REST service. This service provides four main queries; find by property, find nearby, find in bounding box, and find with 1 mile of a route. Many companies who use this service, often already have their business data already stored in a database somewhere and have been uploading a small subset of it into this service to power applications like store locators. Since key-based authentication provides basic security, it has been recommended that this service only be used with public facing data.
+The spatial data services in Bing Maps provide a simple spatial data storage solution for hosting business data and exposing it as a spatial REST service. This service provides four main queries: find by property, find nearby, find in bounding box, and find within 1 mile of a route. Many companies that use this service often already have their business data stored in a database somewhere and upload a small subset of it into this service to power applications like store locators. Since key-based authentication provides basic security, it's recommended that this service be used only with public facing data.
-Most business location data starts off in a database. As such it is recommended to use existing Azure storage solutions such as Azure SQL or Azure PostgreSQL (with the PostGIS plugin). Both of these storage solutions support spatial data and provide a rich set of spatial querying capabilities. Once your data is in a suitable storage solution, it can then be integrated into your application by creating a custom web service, or by using a framework such as ASP.NET or Entity Framework. Using this approach provides more querying capabilities and as well as much higher security options.
+Most business location data starts off in a database. As such it's recommended to use existing Azure storage solutions such as Azure SQL or Azure PostgreSQL (with the PostGIS plugin). Both of these storage solutions support spatial data and provide a rich set of spatial querying capabilities. Once your data is in a suitable storage solution, it can then be integrated into your application by creating a custom web service, or by using a framework such as ASP.NET or Entity Framework. Using this approach is more secure and provides more querying capabilities.
Azure Cosmos DB also provides a limited set of spatial capabilities that, depending on your scenario, may be sufficient. Here are some useful resources around hosting and querying spatial data in Azure.
-* [Azure SQL Spatial Data Types overview](/sql/relational-databases/spatial/spatial-data-types-overview)
-* [Azure SQL Spatial ΓÇô Query nearest neighbor](/sql/relational-databases/spatial/query-spatial-data-for-nearest-neighbor)
-* [Azure Cosmos DB geospatial capabilities overview](../cosmos-db/sql-query-geospatial-intro.md)
+* [Azure SQL Spatial Data Types overview]
+* [Azure SQL Spatial ΓÇô Query nearest neighbor]
+* [Azure Cosmos DB geospatial capabilities overview]
## Client libraries
-Azure Maps provides client libraries for the following programming languages;
+Azure Maps provides client libraries for the following programming languages:
* JavaScript, TypeScript, Node.js ΓÇô [documentation](./how-to-use-services-module.md) \| [npm package](https://www.npmjs.com/package/azure-maps-rest)
-Open-source client libraries for other programming languages;
+Open-source client libraries for other programming languages:
* .NET Standard 2.0 ΓÇô [GitHub project](https://github.com/perfahlen/AzureMapsRestServices) \| [NuGet package](https://www.nuget.org/packages/AzureMapsRestToolkit/)
Learn more about the Azure Maps REST services.
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[Search]: /rest/api/maps/search
+[Route directions]: /rest/api/maps/route/getroutedirections
+[Route Matrix]: /rest/api/maps/route/postroutematrixpreview
+[Render]: /rest/api/maps/render/getmapimage
+[Route Range]: /rest/api/maps/route/getrouterange
+[POST Route directions]: /rest/api/maps/route/postroutedirections
+[Route]: /rest/api/maps/route
+[Time Zone]: /rest/api/maps/timezone
+[Elevation]: /rest/api/maps/elevation
+
+[Azure Maps Creator]: creator-indoor-maps.md
+[Spatial operations]: /rest/api/maps/spatial
+[Map Tiles]: /rest/api/maps/render/getmaptile
+[Map imagery tile]: /rest/api/maps/render/getmapimagerytile
+[Batch routing]: /rest/api/maps/route/postroutedirectionsbatchpreview
+[Traffic]: /rest/api/maps/traffic
+[Geolocation API]: /rest/api/maps/geolocation/get-ip-to-location
+[Weather services]: /rest/api/maps/weather
+
+[Best practices for Azure Maps Search service]: how-to-use-best-practices-for-search.md
+[Best practices for Azure Maps Route service]: how-to-use-best-practices-for-routing.md
+
+[free account]: https://azure.microsoft.com/free/?azure-portal=true
+[manage authentication in Azure Maps]: how-to-manage-authentication.md
+
+[Free-form address geocoding]: /rest/api/maps/search/getsearchaddress
+[Structured address geocoding]: /rest/api/maps/search/getsearchaddressstructured
+[Batch address geocoding]: /rest/api/maps/search/postsearchaddressbatchpreview
+[Fuzzy search]: /rest/api/maps/search/getsearchfuzzy
+[Fuzzy batch search]: /rest/api/maps/search/postsearchfuzzybatchpreview
+
+[Authentication with Azure Maps]: azure-maps-authentication.md
+[Localization support in Azure Maps]: supported-languages.md
+[Azure Maps supported views]: supported-languages.md#azure-maps-supported-views
+
+[Address reverse geocoder]: /rest/api/maps/search/getsearchaddressreverse
+[Cross street reverse geocoder]: /rest/api/maps/search/getsearchaddressreversecrossstreet
+[Batch address reverse geocoder]: /rest/api/maps/search/postsearchaddressreversebatchpreview
+
+[POI search]: /rest/api/maps/search/get-search-poi
+[POI category search]: /rest/api/maps/search/get-search-poi-category
+[Calculate route]: /rest/api/maps/route/getroutedirections
+[Batch route]: /rest/api/maps/route/postroutedirectionsbatchpreview
+
+[Snap points to logical route path]: https://samples.azuremaps.com/?sample=snap-points-to-logical-route-path?azure-portal=true
+[Basic snap to road logic]: https://samples.azuremaps.com/?sample=basic-snap-to-road-logic?azure-portal=true
+
+[quadtree tile pyramid math]: zoom-levels-and-tile-grid.md
+[turf js]: https://turfjs.org?azure-portal=true
+[NetTopologySuite]: https://github.com/NetTopologySuite/NetTopologySuite?azure-portal=true
+
+[Map image render]: /rest/api/maps/render/getmapimagerytile
+[Supported map styles]: supported-map-styles.md
+[Render custom data on a raster map]: how-to-render-custom-data.md
+
+[Search along route]: /rest/api/maps/search/postsearchalongroute
+[Search within geometry]: /rest/api/maps/search/postsearchinsidegeometry
+[nearby search]: /rest/api/maps/search/getsearchnearby
+[Search Polygon API]: /rest/api/maps/search/getsearchpolygon
+[Search for a location using Azure Maps Search services]: how-to-search-for-address.md
+
+[Traffic flow segments]: /rest/api/maps/traffic/gettrafficflowsegment
+[Traffic flow tiles]: /rest/api/maps/traffic/gettrafficflowtile
+[Traffic incident details]: /rest/api/maps/traffic/gettrafficincidentdetail
+[Traffic incident tiles]: /rest/api/maps/traffic/gettrafficincidenttile
+[Traffic incident viewport]: /rest/api/maps/traffic/gettrafficincidentviewport
+
+[Time zone by ID]: /rest/api/maps/timezone/gettimezonebyid
+[Time zone Windows to IANA]: /rest/api/maps/timezone/gettimezonewindowstoiana
+[Time zone Enum IANA]: /rest/api/maps/timezone/gettimezoneenumiana
+[Time zone Enum Windows]: /rest/api/maps/timezone/gettimezoneenumwindows
+[Time zone IANA version]: /rest/api/maps/timezone/gettimezoneianaversion
+[Time zone by coordinate]: /rest/api/maps/timezone/gettimezonebycoordinates
+
+[Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md
+
+[Azure SQL Spatial Data Types overview]: /sql/relational-databases/spatial/spatial-data-types-overview
+[Azure SQL Spatial ΓÇô Query nearest neighbor]: /sql/relational-databases/spatial/query-spatial-data-for-nearest-neighbor
+[Azure Cosmos DB geospatial capabilities overview]: ../cosmos-db/sql-query-geospatial-intro.md
azure-maps Migrate From Bing Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps.md
The following table provides a high-level list of Bing Maps features and the rel
| Autosuggest | Γ£ô | | Directions (including truck) | Γ£ô | | Distance Matrix | Γ£ô |
-| Elevations | Γ£ô |
+| Elevations | <sup>1</sup> |
| Imagery ΓÇô Static Map | Γ£ô | | Imagery Metadata | Γ£ô | | Isochrones | Γ£ô |
The following table provides a high-level list of Bing Maps features and the rel
| Traffic Incidents | Γ£ô | | Configuration driven maps | N/A |
+<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in your Azure Maps applications, see [Create elevation data & services](elevation-data-services.md).
+ Bing Maps provides basic key-based authentication. Azure Maps provides both basic key-based authentication and highly secure Azure Active Directory authentication. ## Licensing considerations
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md
The table lists key API features in the Google Maps V3 JavaScript SDK and the su
| Geocoder service | Γ£ô | | Directions service | Γ£ô | | Distance Matrix service | Γ£ô |
-| Elevation service | Γ£ô |
+| Elevation service | <sup>1</sup> |
+
+<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in your Azure Maps applications, see [Create elevation data & services](elevation-data-services.md).
## Notable differences in the web SDKs
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md
You will also learn:
The table shows the Azure Maps service APIs, which have a similar functionality to the listed Google Maps service APIs.
-| Google Maps service API | Azure Maps service API |
-|-||
-| Directions | [Route](/rest/api/maps/route) |
-| Distance Matrix | [Route Matrix](/rest/api/maps/route/postroutematrixpreview) |
-| Geocoding | [Search](/rest/api/maps/search) |
-| Places Search | [Search](/rest/api/maps/search) |
-| Place Autocomplete | [Search](/rest/api/maps/search) |
-| Snap to Road | See [Calculate routes and directions](#calculate-routes-and-directions) section. |
-| Speed Limits | See [Reverse geocode a coordinate](#reverse-geocode-a-coordinate) section. |
-| Static Map | [Render](/rest/api/maps/render/getmapimage) |
-| Time Zone | [Time Zone](/rest/api/maps/timezone) |
-| Elevation | [Elevation](/rest/api/maps/elevation) |
+| Google Maps service API | Azure Maps service API |
+|-|--|
+| Directions | [Route](/rest/api/maps/route) |
+| Distance Matrix | [Route Matrix](/rest/api/maps/route/postroutematrixpreview) |
+| Geocoding | [Search](/rest/api/maps/search) |
+| Places Search | [Search](/rest/api/maps/search) |
+| Place Autocomplete | [Search](/rest/api/maps/search) |
+| Snap to Road | See [Calculate routes and directions](#calculate-routes-and-directions) section. |
+| Speed Limits | See [Reverse geocode a coordinate](#reverse-geocode-a-coordinate) section. |
+| Static Map | [Render](/rest/api/maps/render/getmapimage) |
+| Time Zone | [Time Zone](/rest/api/maps/timezone) |
+| Elevation | [Elevation](/rest/api/maps/elevation)<sup>1</sup> |
+
+<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in your Azure Maps applications, see [Create elevation data & services](elevation-data-services.md).
The following service APIs aren't currently available in Azure Maps:
azure-maps Migrate From Google Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps.md
The table provides a high-level list of Azure Maps features, which correspond to
| REST Service APIs | Γ£ô | | Directions (Routing) | Γ£ô | | Distance Matrix | Γ£ô |
-| Elevation | Γ£ô |
+| Elevation | <sup>1</sup> |
| Geocoding (Forward/Reverse) | Γ£ô | | Geolocation | Γ£ô | | Nearest Roads | Γ£ô |
The table provides a high-level list of Azure Maps features, which correspond to
| Maps Embedded API | N/A | | Map URLs | N/A |
+<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in your Azure Maps applications, see [Create elevation data & services](elevation-data-services.md).
+ Google Maps provides basic key-based authentication. Azure Maps provides both basic key-based authentication and Azure Active Directory authentication. Azure Active Directory authentication provides more security features, compared to the basic key-based authentication. ## Licensing considerations
azure-maps Rest Sdk Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-sdk-developer-guide.md
Azure Maps Java SDK supports [Java 8][Java 8] or above.
| [Rendering][java rendering readme]| [azure-maps-rendering][java rendering package]|[rendering sample][java rendering sample] | | [Geolocation][java geolocation readme]|[azure-maps-geolocation][java geolocation package]|[geolocation sample][java geolocation sample] | | [Timezone][java timezone readme] | [azure-maps-timezone][java timezone package] | [timezone samples][java timezone sample] |
-| [Elevation][java elevation readme] | [azure-maps-elevation][java elevation package] | [elevation samples][java elevation sample] |
+| [Elevation][java elevation readme] ([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023)) | [azure-maps-elevation][java elevation package] | [elevation samples][java elevation sample] |
For more information, see the [Java SDK Developers Guide].
azure-maps Understanding Azure Maps Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md
The following table summarizes the Azure Maps services that generate transaction
| Azure Maps Service | Billable | Transaction Calculation | Meter | |--|-|-|-| | [Data v1](/rest/api/maps/data)<br>[Data v2](/rest/api/maps/data-v2)<br>[Data registry](/rest/api/maps/data-registry) | Yes, except for MapDataStorageService.GetDataStatus and MapDataStorageService.GetUserData, which are non-billable| One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>|
-| [Elevation (DEM)](/rest/api/maps/elevation)| Yes| One request = 2 transactions<br> <ul><li>If requesting elevation for a single point then one request = 1 transaction| <ul><li>Location Insights Elevation (Gen2 pricing)</li><li>Standard S1 Elevation Service Transactions (Gen1 S1 pricing)</li></ul>|
| [Elevation (DEM)](/rest/api/maps/elevation) ([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023))| Yes| One request = 2 transactions<br> <ul><li>If requesting elevation for a single point then one request = 1 transaction| <ul><li>Location Insights Elevation (Gen2 pricing)</li><li>Standard S1 Elevation Service Transactions (Gen1 S1 pricing)</li></ul>|
| [Geolocation](/rest/api/maps/geolocation)| Yes| One request = 1 transaction| <ul><li>Location Insights Geolocation (Gen2 pricing)</li><li>Standard S1 Geolocation Transactions (Gen1 S1 pricing)</li><li>Standard Geolocation Transactions (Gen1 S0 pricing)</li></ul>| | [Render v1](/rest/api/maps/render)<br>[Render v2](/rest/api/maps/render-v2) | Yes, except for Terra maps (MapTile.GetTerraTile and layer=terra) which are non-billable.|<ul><li>15 tiles = 1 transaction, except microsoft.dem is one tile = 50 transactions</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul> <br> For Creator related usage, see the [Creator table](#azure-maps-creator). |<ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul>| | [Route](/rest/api/maps/route) | Yes | One request = 1 transaction<br><ul><li>If using the Route Matrix, each cell in the Route Matrix request generates a billable Route transaction.</li><li>If using Batch Directions, each origin/destination coordinate pair in the Batch request call generates a billable Route transaction. Note, the billable Route transaction usage results generated by the batch request will have **-Batch** appended to the API name of your Azure portal metrics report.</li></ul> | <ul><li>Location Insights Routing (Gen2 pricing)</li><li>Standard S1 Routing Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> |
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
description: Overview of the Azure Monitor Agent, which collects monitoring data
Previously updated : 3/24/2023 Last updated : 3/30/2023
In addition to the generally available data collection listed above, Azure Monit
| : | : | : | : | | [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Public preview | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Auto-deployment of Azure Monitor Agent (Preview)](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md) | | [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors/windows-forwarded-events.md)</li><li>Windows DNS logs: [Public preview](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: [Public preview](../../sentinel/connect-cef-ama.md#set-up-the-common-event-format-cef-via-ama-connector)</li></ul> | Sentinel DNS extension, if youΓÇÖre collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | - |
-| [Change Tracking](../../automation/change-tracking/overview.md) | Public preview | Change Tracking extension | [Change Tracking and Inventory using Azure Monitor Agent](../../automation/change-tracking/overview-monitoring-agent.md) |
+| [Change Tracking and Inventory Management](../../automation/change-tracking/overview.md) | Public preview | Change Tracking extension | [Change Tracking and Inventory using Azure Monitor Agent](../../automation/change-tracking/overview-monitoring-agent.md) |
| [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](../../update-center/index.yml) |
+| [Automation Hybrid Runbook Worker overview](../../automation/automation-hybrid-runbook-worker.md) (available without Azure Monitor Agent) | Migrate to Azure Automation Hybrid Worker Extension - Generally available | None | [Migrate an existing Agent based to Extension based Hybrid Workers](../../automation/extension-based-hybrid-runbook-worker-install.md#migrate-an-existing-agent-based-to-extension-based-hybrid-workers) |
| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Public preview | Azure NetworkWatcher extension | [Monitor network connectivity by using Azure Monitor Agent](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) | | Azure Stack HCI Insights | private preview | No additional extension installed | [Sign up here](https://aka.ms/amadcr-privatepreviews) | | Azure Virtual Desktop (AVD) Insights | private preview | No additional extension installed | [Sign up here](https://aka.ms/amadcr-privatepreviews) |
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
[Azure Monitor Agent (AMA)](./agents-overview.md) replaces the Log Analytics agent (also known as MMA and OMS) for both Windows and Linux machines, in both Azure and non-Azure (on-premises and third-party clouds) environments. It introduces a simplified, flexible method of configuring collection configuration called [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md). This article outlines the benefits of migrating to Azure Monitor Agent and provides guidance on how to implement a successful migration. > [!IMPORTANT]
-> The Log Analytics agent will be [retired on **August 31, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you're currently using the Log Analytics agent with Azure Monitor or other supported features and services, you should start planning your migration to Azure Monitor Agent by using the information in this article.
+> The Log Analytics agent will be [retired on **August 31, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you're currently using the Log Analytics agent with Azure Monitor or [other supported features and services](./agents-overview.md#supported-services-and-features), you should start planning your migration to Azure Monitor Agent by using the information in this article, and take the availability of the other solutions and services into account.
## Benefits
In addition to consolidating and improving upon legacy Log Analytics agents, Azu
2. Deploy extensions and DCR-associations: 1. **Test first** by deploying extensions<sup>2</sup> and DCR-Associations on a few non-production machines. You can also deploy side-by-side on machines running legacy agents (see the section above for agent coexistence
- 2. Once data starts flowing via Azure Monitor agent, **compare it with legacy agent data** to ensure there are no gaps. You can do this by joining with the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table which indicates 'Azure Monitor Agent' for the new data collection
+ 2. Once data starts flowing via Azure Monitor agent, **compare it with legacy agent data** to ensure there are no gaps. You can do this by joining with the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table which indicates 'Azure Monitor Agent' for the new data collection.
3. Post testing, you can **roll out broadly**<sup>3</sup> using [built-in policies]() for at-scale deployment of extensions and DCR-associations. **Using policy will also ensure automatic deployment of extensions and DCR-associations for any new machines in future.** 4. Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to **monitor the at-scale migration** across your machines
-3. **Validate** that Azure Monitor Agent is collecting data as expected and all **downstream dependencies**, such as dashboards, alerts, and workbooks, function properly.
+3. **Validate** that Azure Monitor Agent is collecting data as expected and all **downstream dependencies**, such as dashboards, alerts, and workbooks, function properly. You can do this by looking at the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table, which indicates 'Azure Monitor Agent' for the new agent and 'Direct Agent' for the legacy agent.
4. Clean up: After you confirm that Azure Monitor Agent is collecting data properly, you may **choose to either disable or uninstall the legacy Log Analytics agents** 1. If you have migrated to Azure Monitor agent for selected features/solutions and you need to continue using the legacy Log Analytics for others, you can
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
Title: Set up the Azure Monitor agent on Windows client devices description: This article describes the instructions to install the agent on Windows 10, 11 client OS devices, configure data collection, manage and troubleshoot the agent. Previously updated : 1/9/2023 Last updated : 3/30/2023
This article provides instructions and guidance for using the client installer f
Using the new client installer described here, you can now collect telemetry data from your Windows client devices in addition to servers and virtual machines. Both the [extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) and this installer use Data Collection rules to configure the **same underlying agent**.
+> [!NOTE]
+> This article provides specific guidance for installing the Azure Monitor agent on Windows client devices, subject to the [limitations below](#limitations). For standard installation and management guidance for the agent, see [the agent extension management guidance](./azure-monitor-agent-manage.md).
+ ### Comparison with virtual machine extension Here is a comparison between client installer and VM extension for Azure Monitor agent:
Here is a comparison between client installer and VM extension for Azure Monitor
| Virtual machines, scale sets | No | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) | Installs the agent using Azure extension framework | | On-premises servers | No | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (with Azure Arc agent) | Installs the agent using Azure extension framework, provided for on-premises by installing Arc agent |
+## Limitations
+1. The Windows client installer supports only the latest Windows machines that are **Azure AD joined** or hybrid Azure AD joined. For more information, see the [prerequisites](#prerequisites) below.
+2. Data Collection rules can only target the Azure AD tenant scope. That is, all DCRs associated with the tenant (via the Monitored Object) apply to all Windows client machines within that tenant that have the agent installed by using this client installer. **Granular targeting using DCRs is not supported** for Windows client devices yet.
+3. No support for Windows machines connected via **Azure private links**
+4. The agent installed using the Windows client installer is designed mainly for Windows desktops or workstations that are **always connected**. Although the agent can be installed on laptops by using this method, it isn't optimized for the battery consumption and network constraints of a laptop.
## Prerequisites 1. The machine must be running Windows client OS version 10 RS4 or higher.
Here is a comparison between client installer and VM extension for Azure Monitor
6. Proceed to create the monitored object that you'll associate data collection rules to, for the agent to actually start operating. > [!NOTE]
-> The agent installed with the client installer currently doesn't support updating configuration once it is installed. Uninstall and reinstall AMA to update its configuration.
+> The agent installed with the client installer currently doesn't support updating local agent settings once it is installed. Uninstall and reinstall AMA to update these settings.
## Create and associate a 'Monitored Object'
-You need to create a 'Monitored Object' (MO) that creates a representation for the Azure AD tenant within Azure Resource Manager (ARM). This ARM entity is what Data Collection Rules are then associated with.
+You need to create a 'Monitored Object' (MO), which represents the Azure AD tenant within Azure Resource Manager (ARM). This ARM entity is what Data Collection Rules are then associated with. **This Monitored Object needs to be created only once for any number of machines in a single AAD tenant**.
Currently this association is only **limited** to the Azure AD tenant scope, which means configuration applied to the tenant will be applied to all devices that are part of the tenant and running the agent. The image below demonstrates how this works:
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
To complete this procedure, you need:
- A VM, Virtual Machine Scale Set, or Arc-enabled on-premises server that writes logs to a text file. Text file requirements and best practices:
- - Do store files on the local drive of the machine on which Azure Monitor Agent is running.
+ - Do store files on the local drive of the machine on which Azure Monitor Agent is running and in the directory that is being monitored.
- Do delineate the end of a record with an end of line. - Do use ASCII or UTF-8 encoding. Other formats such as UTF-16 aren't supported. - Do create a new log file every day so that you can remove old files easily. - Do clean up all log files older than 2 days in the monitored directory. Azure Monitor Agent does not delete old log files and tracking them uses up Agent resources. - Do Not overwrite an existing file with new data. You should only append new data to the file.
- - Do Not rename a file and open a new file with the same name to log to.
- - Do Not rename or copy large log files in to the monitored directory.
+ - Do Not rename a file and open a new file with the same name to log to.
+ - Do Not rename or copy large log files into the monitored directory. If you must, do not exceed 50 MB per minute. A short logging sketch that follows these practices appears after this list.
- Do Not rename files in the monitored directory to a new name that is also in the monitored directory. This can cause incorrect ingestion behavior.
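The list above translates naturally into a small logging routine. The sketch below is illustrative only: the directory, file-name pattern, and retention window are assumptions, but it follows the practices above by appending UTF-8 records to a date-stamped file and deleting files older than two days instead of renaming or overwriting them.

```python
# Illustrative sketch: append text log records in a way that matches the practices above.
# The directory, file-name pattern, and two-day retention window are assumptions.
from datetime import datetime, timedelta
from pathlib import Path

LOG_DIR = Path("/var/log/myapp")   # the monitored directory (placeholder)
RETENTION = timedelta(days=2)

def write_record(message: str) -> None:
    LOG_DIR.mkdir(parents=True, exist_ok=True)
    # One new file per day, opened in append mode; never overwrite or rename an existing file.
    log_file = LOG_DIR / f"app-{datetime.utcnow():%Y-%m-%d}.log"
    with log_file.open("a", encoding="utf-8") as f:
        f.write(message.rstrip("\n") + "\n")   # end every record with an end of line

def clean_old_files() -> None:
    # Azure Monitor Agent doesn't delete old files, so remove anything older than two days.
    cutoff = datetime.utcnow() - RETENTION
    for path in LOG_DIR.glob("app-*.log"):
        if datetime.utcfromtimestamp(path.stat().st_mtime) < cutoff:
            path.unlink()

write_record("2023-03-30T01:19:02Z INFO application started")
clean_old_files()
```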
azure-monitor Data Model Complete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-complete.md
Title: Application Insights telemetry data model
-description: This article describes the Application Insights telemetry data model including Request, Dependency, Exception, Trace, Event, Metric, PageView, and Context.
+description: This article describes the Application Insights telemetry data model including request, dependency, exception, trace, event, metric, PageView, and context.
documentationcenter: .net
# Application Insights telemetry data model
-[Application Insights](./app-insights-overview.md) sends telemetry from your web application to the Azure portal so that you can analyze the performance and usage of your application. The telemetry model is standardized, so it's possible to create platform and language-independent monitoring.
+[Application Insights](./app-insights-overview.md) sends telemetry from your web application to the Azure portal so that you can analyze the performance and usage of your application. The telemetry model is standardized, so it's possible to create platform- and language-independent monitoring.
Data collected by Application Insights models this typical application execution pattern. ![Diagram that shows an Application Insights telemetry data model.](./media/data-model-complete/application-insights-data-model.png)
-The following types of telemetry are used to monitor the execution of your app. Three types are automatically collected by the Application Insights SDK from the web application framework:
+The following types of telemetry are used to monitor the execution of your app. The Application Insights SDK from the web application framework automatically collects these three types:
* [Request](#request): Generated to log a request received by your app. For example, the Application Insights web SDK automatically generates a Request telemetry item for each HTTP request that your web app receives.
To report data model or schema problems and suggestions, use our [GitHub reposit
A request telemetry item in [Application Insights](./app-insights-overview.md) represents the logical sequence of execution triggered by an external request to your application. Every request execution is identified by a unique `id` and `url` that contain all the execution parameters.
-You can group requests by logical `name` and define the `source` of this request. Code execution can result in `success` or `fail` and has a certain `duration`. Both success and failure executions can be grouped further by `resultCode`. Start time for the request telemetry is defined on the envelope level.
+You can group requests by logical `name` and define the `source` of this request. Code execution can result in `success` or `fail` and has a certain `duration`. You can further group success and failure executions by using `resultCode`. Start time for the request telemetry is defined on the envelope level.
Request telemetry supports the standard extensibility model by using custom `properties` and `measurements`.
Request telemetry supports the standard extensibility model by using custom `pro
### Name
-The name of the request represents the code path taken to process the request. A low cardinality value allows for better grouping of requests. For HTTP requests, it represents the HTTP method and URL path template like `GET /values/{id}` without the actual `id` value.
+This field is the name of the request and it represents the code path taken to process the request. A low cardinality value allows for better grouping of requests. For HTTP requests, it represents the HTTP method and URL path template like `GET /values/{id}` without the actual `id` value.
The Application Insights web SDK sends a request name "as is" with regard to letter case. Grouping on the UI is case sensitive, so `GET /Home/Index` is counted separately from `GET /home/INDEX` even though often they result in the same controller and action execution. The reason for that is that URLs in general are [case sensitive](https://www.w3.org/TR/WD-html40-970708/htmlweb.html). You might want to see if all `404` errors happened for URLs typed in uppercase. You can read more about request name collection by the ASP.NET web SDK in this [blog post](https://apmtips.com/posts/2015-02-23-request-name-and-url/).
-**Maximum length**: 1,024 characters
+**Maximum length:** 1,024 characters
### ID ID is the identifier of a request call instance. It's used for correlation between the request and other telemetry items. The ID should be globally unique. For more information, see [Telemetry correlation in Application Insights](./correlation.md).
-**Maximum length**: 128 characters
+**Maximum length:** 128 characters
### URL URL is the request URL with all query string parameters.
-**Maximum length**: 2,048 characters
+**Maximum length:** 2,048 characters
### Source Source is the source of the request. Examples are the instrumentation key of the caller or the IP address of the caller. For more information, see [Telemetry correlation in Application Insights](./correlation.md).
-**Maximum length**: 1,024 characters
+**Maximum length:** 1,024 characters
### Duration
The request duration is formatted as `DD.HH:MM:SS.MMMMMM`. It must be positive a
The response code is the result of a request execution. It's the HTTP status code for HTTP requests. It might be an `HRESULT` value or an exception type for other request types.
-**Maximum length**: 1,024 characters
+**Maximum length:** 1,024 characters
### Success
-Success indicates whether a call was successful or unsuccessful. This field is required. When a request isn't set explicitly to `false`, it's considered to be successful. Set this value to `false` if the operation was interrupted by an exception or a returned error result code.
+Success indicates whether a call was successful or unsuccessful. This field is required. When a request isn't set explicitly to `false`, it's considered to be successful. If an exception or returned error result code interrupted the operation, set this value to `false`.
For web applications, Application Insights defines a request as successful when the response code is less than `400` or equal to `401`. However, there are cases when this default mapping doesn't match the semantics of the application. Response code `404` might indicate "no records," which can be part of regular flow. It also might indicate a broken link. For broken links, you can implement more advanced logic. You can mark broken links as failures only when those links are located on the same site by analyzing the URL referrer. Or you can mark them as failures when they're accessed from the company's mobile application. Similarly, `301` and `302` indicate failure when they're accessed from the client that doesn't support redirect.
-Partially accepted content `206` might indicate a failure of an overall request. For instance, an Application Insights endpoint might receive a batch of telemetry items as a single request. It returns `206` when some items in the batch weren't processed successfully. An increasing rate of `206` indicates a problem that needs to be investigated. Similar logic applies to `207` Multi-Status where the success might be the worst of separate response codes.
+Partially accepted content `206` might indicate a failure of an overall request. For instance, an Application Insights endpoint might receive a batch of telemetry items as a single request. It returns `206` when some items in the batch weren't processed successfully. An increasing rate of `206` indicates a problem that needs to be investigated. Similar logic applies to `207` Multi-Status, where the success might be the worst of separate response codes.
You can read more about the request result code and status code in the [blog post](https://apmtips.com/posts/2016-12-03-request-success-and-response-code/).
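As an illustration of the advanced logic described above, the following sketch (the host name is an assumed example, and this isn't an Application Insights API) marks a `404` as a failure only when the referrer is on the same site.

```python
# Illustrative sketch of the "broken link" rule described above; not an Application Insights API.
from typing import Optional
from urllib.parse import urlparse

OWN_HOST = "www.contoso.com"   # assumed example host

def is_successful(status_code: int, referrer: Optional[str] = None) -> bool:
    # Default mapping: success when the response code is less than 400 or equal to 401.
    if status_code < 400 or status_code == 401:
        return True
    # Treat 404 as a failure only when the referring page is on the same site (a broken internal link).
    if status_code == 404 and referrer:
        return urlparse(referrer).hostname != OWN_HOST
    return False

print(is_successful(200))                                      # True
print(is_successful(404, "https://www.contoso.com/home"))      # False: broken internal link
print(is_successful(404, "https://example.org/old-bookmark"))  # True: can be part of regular flow
```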
You can read more about the request result code and status code in the [blog pos
## Dependency
-Dependency Telemetry (in [Application Insights](./app-insights-overview.md)) represents an interaction of the monitored component with a remote component such as SQL or an HTTP endpoint.
+Dependency telemetry (in [Application Insights](./app-insights-overview.md)) represents an interaction of the monitored component with a remote component such as SQL or an HTTP endpoint.
### Name
-Name of the command initiated with this dependency call. Low cardinality value. Examples are stored procedure name and URL path template.
+This field is the name of the command initiated with this dependency call. It has a low cardinality value. Examples are stored procedure name and URL path template.
### ID
-Identifier of a dependency call instance. Used for correlation with the request telemetry item corresponding to this dependency call. For more information, see [correlation](./correlation.md) page.
+ID is the identifier of a dependency call instance. It's used for correlation with the request telemetry item that corresponds to this dependency call. For more information, see [Telemetry correlation in Application Insights](./correlation.md).
### Data
-Command initiated by this dependency call. Examples are SQL statement and HTTP URL with all query parameters.
+This field is the command initiated by this dependency call. Examples are SQL statement and HTTP URL with all query parameters.
### Type
-Dependency type name. Low cardinality value for logical grouping of dependencies and interpretation of other fields like commandName and resultCode. Examples are SQL, Azure table, and HTTP.
+This field is the dependency type name. It has a low cardinality value for logical grouping of dependencies and interpretation of other fields like `commandName` and `resultCode`. Examples are SQL, Azure table, and HTTP.
### Target
-Target site of a dependency call. Examples are server name, host address. For more information, see [correlation](./correlation.md) page.
+This field is the target site of a dependency call. Examples are server name and host address. For more information, see [Telemetry correlation in Application Insights](./correlation.md).
### Duration
-Request duration in format: `DD.HH:MM:SS.MMMMMM`. Must be less than `1000` days.
+The request duration is in the format `DD.HH:MM:SS.MMMMMM`. It must be less than `1000` days.
### Result code
-Result code of a dependency call. Examples are SQL error code and HTTP status code.
+This field is the result code of a dependency call. Examples are SQL error code and HTTP status code.
### Success
-Indication of successful or unsuccessful call.
+This field is the indication of a successful or unsuccessful call.
### Custom properties
Indication of successful or unsuccessful call.
## Exception
-In [Application Insights](./app-insights-overview.md), an instance of Exception represents a handled or unhandled exception that occurred during execution of the monitored application.
+In [Application Insights](./app-insights-overview.md), an instance of exception represents a handled or unhandled exception that occurred during execution of the monitored application.
-### Problem Id
+### Problem ID
-Identifier of where the exception was thrown in code. Used for exceptions grouping. Typically a combination of exception type and a function from the call stack.
+The problem ID identifies where the exception was thrown in code. It's used for exceptions grouping. Typically, it's a combination of an exception type and a function from the call stack.
-Max length: 1024 characters
+**Maximum length:** 1,024 characters
### Severity level
-Trace severity level. Value can be `Verbose`, `Information`, `Warning`, `Error`, `Critical`.
+This field is the trace severity level. The value can be `Verbose`, `Information`, `Warning`, `Error`, or `Critical`.
### Exception details
Trace telemetry in [Application Insights](./app-insights-overview.md) represents
Trace message.
-**Maximum length**: 32,768 characters
+**Maximum length:** 32,768 characters
### Severity level Trace severity level.
-**Values**: `Verbose`, `Information`, `Warning`, `Error`, and `Critical`
+**Values:** `Verbose`, `Information`, `Warning`, `Error`, and `Critical`
### Custom properties
Trace severity level.
## Event
-You can create event telemetry items (in [Application Insights](./app-insights-overview.md)) to represent an event that occurred in your application. Typically, it's a user interaction such as a button click or order checkout. It can also be an application lifecycle event like initialization or a configuration update.
+You can create event telemetry items (in [Application Insights](./app-insights-overview.md)) to represent an event that occurred in your application. Typically, it's a user interaction such as a button click or an order checkout. It can also be an application lifecycle event like initialization or a configuration update.
-Semantically, events may or may not be correlated to requests. However, if used properly, event telemetry is more important than requests or traces. Events represent business telemetry and should be subject to separate, less aggressive [sampling](./api-filtering-sampling.md).
+Semantically, events might or might not be correlated to requests. If used properly, event telemetry is more important than requests or traces. Events represent business telemetry and should be subject to separate, less aggressive [sampling](./api-filtering-sampling.md).
### Name
-Event name: To allow proper grouping and useful metrics, restrict your application so that it generates a few separate event names. For example, don't use a separate name for each generated instance of an event.
+**Event name:** To allow proper grouping and useful metrics, restrict your application so that it generates a few separate event names. For example, don't use a separate name for each generated instance of an event.
**Maximum length:** 512 characters
Event name: To allow proper grouping and useful metrics, restrict your applicati
## Metric
-There are two types of metric telemetry supported by [Application Insights](./app-insights-overview.md): single measurement and pre-aggregated metric. Single measurement is just a name and value. Pre-aggregated metric specifies minimum and maximum value of the metric in the aggregation interval and standard deviation of it.
+[Application Insights](./app-insights-overview.md) supports two types of metric telemetry: single measurement and preaggregated metric. Single measurement is just a name and value. Preaggregated metric specifies the minimum and maximum value of the metric in the aggregation interval and the standard deviation of it.
-Pre-aggregated metric telemetry assumes that aggregation period was one minute.
+Preaggregated metric telemetry assumes that the aggregation period was one minute.
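To make the aggregation concrete, this sketch computes the preaggregated fields (value as the sum, count, min, max, and standard deviation) from a hypothetical one-minute batch of raw measurements. The metric name and sample values are made up.

```python
# Illustrative sketch: preaggregate one minute of raw measurements into the fields described above.
from statistics import pstdev

samples = [12.0, 15.5, 9.8, 20.1, 14.2]   # made-up measurements from a one-minute interval

preaggregated_metric = {
    "name": "queue_length",        # the metric name shown in the portal (made up)
    "value": sum(samples),         # sum of the individual measurements
    "count": len(samples),         # metric weight of the aggregate
    "min": min(samples),
    "max": max(samples),
    "stdDev": pstdev(samples),     # standard deviation over the aggregation interval
}

print(preaggregated_metric)
```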
-There are several well-known metric names supported by Application Insights. These metrics placed into performanceCounters table.
+Application Insights supports several well-known metric names. These metrics are placed into the `performanceCounters` table.
-Metric representing system and process counters:
+The following table shows the metrics that represent system and process counters.
-| **.NET name** | **Platform agnostic name** | **Description**
-| - | -- | -
-| `\Processor(_Total)\% Processor Time` | Work in progress... | total machine CPU
-| `\Memory\Available Bytes` | Work in progress... | Shows the amount of physical memory, in bytes, available to processes running on the computer. It is calculated by summing the amount of space on the zeroed, free, and standby memory lists. Free memory is ready for use; zeroed memory consists of pages of memory filled with zeros to prevent later processes from seeing data used by a previous process; standby memory is memory that has been removed from a process's working set (its physical memory) en route to disk but is still available to be recalled. See [Memory Object](/previous-versions/ms804008(v=msdn.10))
-| `\Process(??APP_WIN32_PROC??)\% Processor Time` | Work in progress... | CPU of the process hosting the application
-| `\Process(??APP_WIN32_PROC??)\Private Bytes` | Work in progress... | memory used by the process hosting the application
-| `\Process(??APP_WIN32_PROC??)\IO Data Bytes/sec` | Work in progress... | rate of I/O operations runs by process hosting the application
-| `\ASP.NET Applications(??APP_W3SVC_PROC??)\Requests/Sec` | Work in progress... | rate of requests processed by application
-| `\.NET CLR Exceptions(??APP_CLR_PROC??)\# of Exceps Thrown / sec` | Work in progress... | rate of exceptions thrown by application
-| `\ASP.NET Applications(??APP_W3SVC_PROC??)\Request Execution Time` | Work in progress... | average requests execution time
-| `\ASP.NET Applications(??APP_W3SVC_PROC??)\Requests In Application Queue` | Work in progress... | number of requests waiting for the processing in a queue
+| .NET name | Platform-agnostic name | Description
+| - | -- | -
+| `\Processor(_Total)\% Processor Time` | Work in progress... | Total machine CPU.
+| `\Memory\Available Bytes` | Work in progress... | Shows the amount of physical memory, in bytes, available to processes running on the computer. It's calculated by summing the amount of space on the zeroed, free, and standby memory lists. Free memory is ready for use. Zeroed memory consists of pages of memory filled with zeros to prevent later processes from seeing data used by a previous process. Standby memory is memory that's been removed from a process's working set (its physical memory) en route to disk but is still available to be recalled. See [Memory Object](/previous-versions/ms804008(v=msdn.10)).
+| `\Process(??APP_WIN32_PROC??)\% Processor Time` | Work in progress... | CPU of the process hosting the application.
+| `\Process(??APP_WIN32_PROC??)\Private Bytes` | Work in progress... | Memory used by the process hosting the application.
+| `\Process(??APP_WIN32_PROC??)\IO Data Bytes/sec` | Work in progress... | Rate of I/O operations run by the process hosting the application.
+| `\ASP.NET Applications(??APP_W3SVC_PROC??)\Requests/Sec` | Work in progress... | Rate of requests processed by an application.
+| `\.NET CLR Exceptions(??APP_CLR_PROC??)\# of Exceps Thrown / sec` | Work in progress... | Rate of exceptions thrown by an application.
+| `\ASP.NET Applications(??APP_W3SVC_PROC??)\Request Execution Time` | Work in progress... | Average request execution time.
+| `\ASP.NET Applications(??APP_W3SVC_PROC??)\Requests In Application Queue` | Work in progress... | Number of requests waiting for the processing in a queue.
-See [Metrics - Get](/rest/api/application-insights/metrics/get) for more information on the Metrics REST API.
+For more information on the Metrics REST API, see [Metrics - Get](/rest/api/application-insights/metrics/get).
### Name
-Name of the metric you'd like to see in Application Insights portal and UI.
+This field is the name of the metric you want to see in the Application Insights portal and UI.
### Value
-Single value for measurement. Sum of individual measurements for the aggregation.
+This field is the single value for measurement. It's the sum of individual measurements for the aggregation.
### Count
-Metric weight of the aggregated metric. Should not be set for a measurement.
+This field is the metric weight of the aggregated metric. It shouldn't be set for a measurement.
### Min
-Minimum value of the aggregated metric. Should not be set for a measurement.
+This field is the minimum value of the aggregated metric. It shouldn't be set for a measurement.
### Max
-Maximum value of the aggregated metric. Should not be set for a measurement.
+This field is the maximum value of the aggregated metric. It shouldn't be set for a measurement.
### Standard deviation
-Standard deviation of the aggregated metric. Should not be set for a measurement.
+This field is the standard deviation of the aggregated metric. It shouldn't be set for a measurement.
### Custom properties
-Metric with the custom property `CustomPerfCounter` set to `true` indicate that the metric represents the Windows performance counter. These metrics placed in performanceCounters table. Not in customMetrics. Also the name of this metric is parsed to extract category, counter, and instance names.
+The metric with the custom property `CustomPerfCounter` set to `true` indicates that the metric represents the Windows performance counter. These metrics are placed in the `performanceCounters` table, not in `customMetrics`. Also, the name of this metric is parsed to extract category, counter, and instance names.
[!INCLUDE [application-insights-data-model-properties](../../../includes/application-insights-data-model-properties.md)] ## PageView
-PageView telemetry (in [Application Insights](./app-insights-overview.md)) is logged when an application user opens a new page of a monitored application. The `Page` in this context is a logical unit that is defined by the developer to be an application tab or a screen and isn't necessarily correlated to a browser webpage load or refresh action. This distinction can be further understood in the context of single-page applications (SPA) where the switch between pages isn't tied to browser page actions. [`pageViews.duration`](/azure/azure-monitor/reference/tables/pageviews) is the time it takes for the application to present the page to the user.
+PageView telemetry (in [Application Insights](./app-insights-overview.md)) is logged when an application user opens a new page of a monitored application. The `Page` in this context is a logical unit that's defined by the developer to be an application tab or a screen and isn't necessarily correlated to a browser webpage load or a refresh action. This distinction can be further understood in the context of single-page applications (SPAs), where the switch between pages isn't tied to browser page actions. The [`pageViews.duration`](/azure/azure-monitor/reference/tables/pageviews) is the time it takes for the application to present the page to the user.
> [!NOTE]
-> * By default, Application Insights SDKs log single PageView events on each browser webpage load action, with [`pageViews.duration`](/azure/azure-monitor/reference/tables/pageviews) populated by [browser timing](#measuring-browsertiming-in-application-insights). Developers can extend additional tracking of PageView events by using the [trackPageView API call](./api-custom-events-metrics.md#page-views).
-> * The default logs retention is 30 days and needs to be adjusted if you want to view page view statistics over a longer period of time.
+> * By default, Application Insights SDKs log single `PageView` events on each browser webpage load action, with [`pageViews.duration`](/azure/azure-monitor/reference/tables/pageviews) populated by [browser timing](#measure-browsertiming-in-application-insights). Developers can extend additional tracking of `PageView` events by using the [trackPageView API call](./api-custom-events-metrics.md#page-views).
+> * The default logs retention is 30 days. If you want to view `PageView` statistics over a longer period of time, you must adjust the setting.
-### Measuring browserTiming in Application Insights
+### Measure browserTiming in Application Insights
Modern browsers expose measurements for page load actions with the [Performance API](https://developer.mozilla.org/en-US/docs/Web/API/Performance_API). Application Insights simplifies these measurements by consolidating related timings into [standard browser metrics](../essentials/metrics-supported.md#microsoftinsightscomponents) as defined by these processing time definitions:
-* Client <--> DNS: Client reaches out to DNS to resolve website hostname, DNS responds with IP address.
-* Client <--> Web Server: Client creates TCP then TLS handshakes with web server.
-* Client <--> Web Server: Client sends request payload, waits for server to execute request, and receives first response packet.
-* Client <--Web Server: Client receives the rest of the response payload bytes from the web server.
-* Client: Client now has full response payload and has to render contents into browser and load the DOM.
-
+* **Client <--> DNS**: Client reaches out to DNS to resolve website hostname, and DNS responds with the IP address.
+* **Client <--> Web Server**: Client creates TCP and then TLS handshakes with the web server.
+* **Client <--> Web Server**: Client sends request payload, waits for the server to execute the request, and receives the first response packet.
+* **Client <--Web Server**: Client receives the rest of the response payload bytes from the web server.
+* **Client**: Client now has full response payload and has to render contents into the browser and load the DOM.
+ * `browserTimings/networkDuration` = #1 + #2 * `browserTimings/sendDuration` = #3 * `browserTimings/receiveDuration` = #4 * `browserTimings/processingDuration` = #5 * `browsertimings/totalDuration` = #1 + #2 + #3 + #4 + #5 * `pageViews/duration`
- * The PageView duration is from the browser's performance timing interface, [`PerformanceNavigationTiming.duration`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceEntry/duration).
- * If `PerformanceNavigationTiming` is available that duration is used.
- * If it's not, then the *deprecated* [`PerformanceTiming`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming) interface is used and the delta between [`NavigationStart`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming/navigationStart) and [`LoadEventEnd`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming/loadEventEnd) is calculated.
- * The developer specifies a duration value when logging custom PageView events using the [trackPageView API call](./api-custom-events-metrics.md#page-views).
+ * The `PageView` duration is from the browser's performance timing interface, [`PerformanceNavigationTiming.duration`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceEntry/duration).
+ * If `PerformanceNavigationTiming` is available, that duration is used.
+
+ If it's not, the *deprecated* [`PerformanceTiming`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming) interface is used and the delta between [`NavigationStart`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming/navigationStart) and [`LoadEventEnd`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming/loadEventEnd) is calculated.
+ * The developer specifies a duration value when logging custom `PageView` events by using the [trackPageView API call](./api-custom-events-metrics.md#page-views).
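The timing arithmetic above can be sketched with made-up values. The milliseconds below are illustrative only; they simply show how the five phases combine into the standard browser metrics.

```python
# Illustrative sketch: combine the five phases above into the standard browser metrics.
# The phase durations are made-up values in milliseconds.
dns = 12          # 1. Client <--> DNS
connect = 48      # 2. Client <--> Web Server: TCP and TLS handshakes
send = 35        # 3. Client <--> Web Server: request sent, first response packet received
receive = 110     # 4. Client <-- Web Server: remaining response payload bytes
processing = 230  # 5. Client: render contents and load the DOM

network_duration = dns + connect      # browserTimings/networkDuration = #1 + #2
send_duration = send                  # browserTimings/sendDuration = #3
receive_duration = receive            # browserTimings/receiveDuration = #4
processing_duration = processing      # browserTimings/processingDuration = #5
total_duration = (network_duration + send_duration
                  + receive_duration + processing_duration)

print(total_duration)   # browserTimings/totalDuration = #1 + #2 + #3 + #4 + #5
```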
## Context
Originally, this field was used to indicate the type of the device the user of t
### Operation ID
-This field is the unique identifier of the root operation. This identifier allows grouping telemetry across multiple components. For more information, see [Telemetry correlation](./correlation.md). The operation ID is created by either a request or a page view. All other telemetry sets this field to the value for the containing request or page view.
+This field is the unique identifier of the root operation. This identifier allows grouping telemetry across multiple components. For more information, see [Telemetry correlation](./correlation.md). Either a request or a page view creates the operation ID. All other telemetry sets this field to the value for the containing request or page view.
**Maximum length:** 128
This field is the unique identifier of the telemetry item's immediate parent. Fo
### Operation name
-This field is the name (group) of the operation. The operation name is created by either a request or a page view. All other telemetry items set this field to the value for the containing request or page view. The operation name is used for finding all the telemetry items for a group of operations (for example, `GET Home/Index`). This context property is used to answer questions like What are the typical exceptions thrown on this page?
+This field is the name (group) of the operation. Either a request or a page view creates the operation name. All other telemetry items set this field to the value for the containing request or page view. The operation name is used for finding all the telemetry items for a group of operations (for example, `GET Home/Index`). This context property is used to answer questions like "What are the typical exceptions thrown on this page?"
**Maximum length:** 1,024
Session ID is the instance of the user's interaction with the app. Information i
The anonymous user ID (User.Id) represents the user of the application. When telemetry is sent from a service, the user context is about the user who initiated the operation in the service.
-[Sampling](./sampling.md) is one of the techniques to minimize the amount of collected telemetry. A sampling algorithm attempts to either sample in or out all the correlated telemetry. An anonymous user ID is used for sampling score generation, so an anonymous user ID should be a random enough value.
+[Sampling](./sampling.md) is one of the techniques to minimize the amount of collected telemetry. A sampling algorithm attempts to either sample in or out all the correlated telemetry. An anonymous user ID is used for sampling score generation, so an anonymous user ID should be a random-enough value.
> [!NOTE] > The count of anonymous user IDs isn't the same as the number of unique application users. The count of anonymous user IDs is typically higher because each time the user opens your app on a different device or browser, or cleans up browser cookies, a new unique anonymous user ID is allocated. This calculation might result in counting the same physical users multiple times.
-User IDs can be cross referenced with session IDs to provide unique telemetry dimensions and establish user activity over a session duration.
+User IDs can be cross-referenced with session IDs to provide unique telemetry dimensions and establish user activity over a session duration.
Using an anonymous user ID to store a username is a misuse of the field. Use an authenticated user ID.
An authenticated user ID is the opposite of an anonymous user ID. This field rep
Use the Application Insights SDK to initialize the authenticated user ID with a value that identifies the user persistently across browsers and devices. In this way, all telemetry items are attributed to that unique ID. This ID enables querying for all telemetry collected for a specific user (subject to [sampling configurations](./sampling.md) and [telemetry filtering](./api-filtering-sampling.md)).
-User IDs can be cross referenced with session IDs to provide unique telemetry dimensions and establish user activity over a session duration.
+User IDs can be cross-referenced with session IDs to provide unique telemetry dimensions and establish user activity over a session duration.
**Maximum length:** 1,024 ### Account ID
-The account ID, in multi-tenant applications, is the tenant account ID or name that the user is acting with. It's used for more user segmentation when a user ID and an authenticated user ID aren't sufficient. Examples might be a subscription ID for the Azure portal or the blog name for a blogging platform.
+The account ID, in multitenant applications, is the tenant account ID or name that the user is acting with. It's used for more user segmentation when a user ID and an authenticated user ID aren't sufficient. Examples might be a subscription ID for the Azure portal or the blog name for a blogging platform.
**Maximum length:** 1,024
This field is the name of the instance where the application is running. For exa
### Internal: SDK version
-For more information, see this [SDK version article](https://github.com/MohanGsk/ApplicationInsights-Home/blob/master/EndpointSpecs/SDK-VERSIONS.md).
+For more information, see [SDK version](https://github.com/MohanGsk/ApplicationInsights-Home/blob/master/EndpointSpecs/SDK-VERSIONS.md).
**Maximum length:** 64
This field represents the node name used for billing purposes. Use it to overrid
## Next steps
-Learn how to use [Application Insights API for custom events and metrics](./api-custom-events-metrics.md), including:
+Learn how to use the [Application Insights API for custom events and metrics](./api-custom-events-metrics.md), including:
- [Custom request telemetry](./api-custom-events-metrics.md#trackrequest) - [Custom dependency telemetry](./api-custom-events-metrics.md#trackdependency) - [Custom trace telemetry](./api-custom-events-metrics.md#tracktrace)
Set up dependency tracking for:
- [.NET](./asp-net-dependencies.md) - [Java](./opentelemetry-enable.md?tabs=java)
-Learn more:
+To learn more:
- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights. - Check out standard context properties collection [configuration](./configuration-with-applicationinsights-config.md#telemetry-initializers-aspnet). - Explore [.NET trace logs in Application Insights](./asp-net-trace-logs.md). - Explore [Java trace logs in Application Insights](./opentelemetry-enable.md?tabs=java#logs).-- Learn about [Azure Functions' built-in integration with Application Insights](../../azure-functions/functions-monitoring.md?toc=/azure/azure-monitor/toc.json) to monitor functions executions.
+- Learn about the [Azure Functions built-in integration with Application Insights](../../azure-functions/functions-monitoring.md?toc=/azure/azure-monitor/toc.json) to monitor functions executions.
- Learn how to [configure an ASP.NET Core](./asp-net.md) application with Application Insights. - Learn how to [diagnose exceptions in your web apps with Application Insights](./asp-net-exceptions.md). - Learn how to [extend and filter telemetry](./api-filtering-sampling.md).-- Use [sampling](./sampling.md) to minimize the amount of telemetry based on data model.
+- Use [sampling](./sampling.md) to minimize the amount of telemetry based on data model.
azure-monitor Distributed Tracing Telemetry Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing-telemetry-correlation.md
+
+ Title: Distributed tracing and telemetry correlation in Azure Application Insights
+description: This article provides information about distributed tracing and telemetry correlation
+ Last updated : 03/30/2023+
+ms.devlang: csharp, java, javascript, python
+++
+# What is distributed tracing and telemetry correlation?
+
+Modern cloud and [microservices](https://azure.com/microservices) architectures have enabled simple, independently deployable services that reduce costs while increasing availability and throughput. However, these architectures have made overall systems more difficult to reason about and debug. Distributed tracing solves this problem by providing a performance profiler that works like call stacks for cloud and microservices architectures.
+
+Azure Monitor provides two experiences for consuming distributed trace data: the [transaction diagnostics](./transaction-diagnostics.md) view for a single transaction/request and the [application map](./app-map.md) view to show how systems interact.
+
+[Application Insights](app-insights-overview.md#application-insights-overview) can monitor each component separately and detect which component is responsible for failures or performance degradation by using distributed telemetry correlation. This article explains the data model, context-propagation techniques, protocols, and implementation of correlation tactics on different languages and platforms used by Application Insights.
+
+## Enable distributed tracing
+
+To enable distributed tracing for an application, add the right agent, SDK, or library to each service based on its programming language.
+
+### Enable via Application Insights through auto-instrumentation or SDKs
+
+The Application Insights agents and SDKs for .NET, .NET Core, Java, Node.js, and JavaScript all support distributed tracing natively. Instructions for installing and configuring each Application Insights SDK are available for:
+
+* [.NET](asp-net.md)
+* [.NET Core](asp-net-core.md)
+* [Java](./opentelemetry-enable.md?tabs=java)
+* [Node.js](../app/nodejs.md)
+* [JavaScript](./javascript.md#enable-distributed-tracing)
+* [Python](opencensus-python.md)
+
+With the proper Application Insights SDK installed and configured, tracing information is automatically collected for popular frameworks, libraries, and technologies by SDK dependency auto-collectors. The full list of supported technologies is available in the [Dependency auto-collection documentation](asp-net-dependencies.md#dependency-auto-collection).
+
+Any technology can also be tracked manually with a call to [TrackDependency](./api-custom-events-metrics.md) on the [TelemetryClient](./api-custom-events-metrics.md).
+
+### Enable via OpenTelemetry
+
+Application Insights now supports distributed tracing through [OpenTelemetry](https://opentelemetry.io/). OpenTelemetry provides vendor-neutral instrumentation for sending traces, metrics, and logs to Application Insights. Initially, the OpenTelemetry community took on distributed tracing. Metrics and logs are still in progress.
+
+A complete observability story includes all three pillars, but currently our [Azure Monitor OpenTelemetry-based exporter preview offerings for .NET, Python, and JavaScript](opentelemetry-enable.md) only include distributed tracing. Our Java OpenTelemetry-based Azure Monitor offering is generally available and fully supported.
+
+The following pages consist of language-by-language guidance to enable and configure Microsoft's OpenTelemetry-based offerings. Importantly, we share the available functionality and limitations of each offering so you can determine whether OpenTelemetry is right for your project.
+
+* [.NET](opentelemetry-enable.md?tabs=net)
+* [Java](opentelemetry-enable.md?tabs=java)
+* [Node.js](opentelemetry-enable.md?tabs=nodejs)
+* [Python](opentelemetry-enable.md?tabs=python)
+
+### Enable via OpenCensus
+
+In addition to the Application Insights SDKs, Application Insights also supports distributed tracing through [OpenCensus](https://opencensus.io/). OpenCensus is an open-source, vendor-agnostic, single distribution of libraries to provide metrics collection and distributed tracing for services. It also enables the open-source community to enable distributed tracing with popular technologies like Redis, Memcached, or MongoDB. [Microsoft collaborates on OpenCensus with several other monitoring and cloud partners](https://open.microsoft.com/2018/06/13/microsoft-joins-the-opencensus-project/).
+
+For more information on OpenCensus for Python, see [Set up Azure Monitor for your Python application](opencensus-python.md).
+
+The OpenCensus website maintains API reference documentation for [Python](https://opencensus.io/api/python/trace/usage.html), [Go](https://godoc.org/go.opencensus.io), and various guides for using OpenCensus.
+
+## Data model for telemetry correlation
+
+Application Insights defines a [data model](../../azure-monitor/app/data-model-complete.md) for distributed telemetry correlation. To associate telemetry with a logical operation, every telemetry item has a context field called `operation_Id`. Every telemetry item in the distributed trace shares this identifier. So even if you lose telemetry from a single layer, you can still associate telemetry reported by other components.
+
+A distributed logical operation typically consists of a set of smaller operations that are requests processed by one of the components. [Request telemetry](../../azure-monitor/app/data-model-complete.md#request) defines these operations. Every request telemetry item has its own `id` that identifies it uniquely and globally. And all telemetry items (such as traces and exceptions) that are associated with the request should set the `operation_parentId` to the value of the request `id`.
+
+[Dependency telemetry](../../azure-monitor/app/data-model-complete.md#dependency) represents every outgoing operation, such as an HTTP call to another component. It also defines its own `id` that's globally unique. Request telemetry, initiated by this dependency call, uses this `id` as its `operation_parentId`.
+
+You can build a view of the distributed logical operation by using `operation_Id`, `operation_parentId`, and `request.id` with `dependency.id`. These fields also define the causality order of telemetry calls.
+
+In a microservices environment, traces from components can go to different storage items. Every component can have its own connection string in Application Insights. To get telemetry for the logical operation, Application Insights queries data from every storage item.
+
+When the number of storage items is large, you need a hint about where to look next. The Application Insights data model defines two fields to solve this problem: `request.source` and `dependency.target`. The first field identifies the component that initiated the dependency request. The second field identifies which component returned the response of the dependency call.
+
+For information on querying from multiple disparate instances by using the `app` query expression, see [app() expression in Azure Monitor query](../logs/app-expression.md#app-expression-in-azure-monitor-query).
+
+## Example
+
+Let's look at an example. An application called Stock Prices shows the current market price of a stock by using an external API called Stock. The Stock Prices application has a page called Stock page that the client web browser opens by using `GET /Home/Stock`. The application queries the Stock API by using the HTTP call `GET /api/stock/value`.
+
+You can analyze the resulting telemetry by running a query:
+
+```kusto
+(requests | union dependencies | union pageViews)
+| where operation_Id == "STYz"
+| project timestamp, itemType, name, id, operation_ParentId, operation_Id
+```
+
+In the results, all telemetry items share the root `operation_Id`. When an Ajax call is made from the page, a new unique ID (`qJSXU`) is assigned to the dependency telemetry, and the ID of the pageView is used as `operation_ParentId`. The server request then uses the Ajax ID as `operation_ParentId`.
+
+| itemType | name | ID | operation_ParentId | operation_Id |
+|||--|--|--|
+| pageView | Stock page | STYz | | STYz |
+| dependency | GET /Home/Stock | qJSXU | STYz | STYz |
+| request | GET Home/Stock | KqKwlrSt9PA= | qJSXU | STYz |
+| dependency | GET /api/stock/value | bBrf2L7mm2g= | KqKwlrSt9PA= | STYz |
+
+When the call `GET /api/stock/value` is made to an external service, you need to know the identity of that server so you can set the `dependency.target` field appropriately. When the external service doesn't support monitoring, `target` is set to the host name of the service. An example is `stock-prices-api.com`. But if the service identifies itself by returning a predefined HTTP header, `target` contains the service identity that allows Application Insights to build a distributed trace by querying telemetry from that service.
+
+## Correlation headers using W3C TraceContext
+
+Application Insights is transitioning to [W3C Trace-Context](https://w3c.github.io/trace-context/), which defines:
+
+- `traceparent`: Carries the globally unique operation ID and unique identifier of the call.
+- `tracestate`: Carries system-specific tracing context.
+
+The latest version of the Application Insights SDK supports the Trace-Context protocol, but you might need to opt in to it. (Backward compatibility with the previous correlation protocol supported by the Application Insights SDK is maintained.)
+
+The [correlation HTTP protocol, also called Request-Id](https://github.com/dotnet/runtime/blob/master/src/libraries/System.Diagnostics.DiagnosticSource/src/HttpCorrelationProtocol.md), is being deprecated. This protocol defines two headers:
+
+- `Request-Id`: Carries the globally unique ID of the call.
+- `Correlation-Context`: Carries the name-value pairs collection of the distributed trace properties.
+
+Application Insights also defines the [extension](https://github.com/lmolkov) for the correlation HTTP protocol. It uses `Request-Context` name-value pairs to propagate the collection of properties used by the immediate caller or callee. The Application Insights SDK uses this header to set the `dependency.target` and `request.source` fields.
+
+The [W3C Trace-Context](https://w3c.github.io/trace-context/) and Application Insights data models map in the following way:
+
+| Application Insights | W3C TraceContext |
+| |-|
+| `Id` of `Request` and `Dependency` | [parent-id](https://w3c.github.io/trace-context/#parent-id) |
+| `Operation_Id` | [trace-id](https://w3c.github.io/trace-context/#trace-id) |
+| `Operation_ParentId` | [parent-id](https://w3c.github.io/trace-context/#parent-id) of this span's parent span. This field must be empty if it's a root span.|
+
+For more information, see [Application Insights telemetry data model](../../azure-monitor/app/data-model-complete.md).
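As a small illustration of the mapping, the snippet below formats a W3C `traceparent` header from an `Operation_Id` (trace-id) and an item `Id` (parent-id). The version and trace-flags bytes are assumed defaults, and the IDs are the sample values used later in this article.

```python
# Illustrative sketch: format a W3C traceparent header from Application Insights correlation fields.
# The IDs are the sample values used later in this article; "00" and "01" are assumed defaults.
operation_id = "4bf92f3577b34da6a3ce929d0e0e4736"   # Operation_Id -> trace-id
item_id = "00f067aa0ba902b7"                        # Request/Dependency Id -> parent-id

traceparent = f"00-{operation_id}-{item_id}-01"
print(traceparent)   # 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
```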
+
+### Enable W3C distributed tracing support for .NET apps
+
+W3C TraceContext-based distributed tracing is enabled by default in all recent
.NET Framework/.NET Core SDKs, along with backward compatibility with the legacy Request-Id protocol.
+
+### Enable W3C distributed tracing support for Java apps
+
+#### Java 3.0 agent
+
+The Java 3.0 agent supports W3C out of the box, and no additional configuration is needed.
+
+#### Java SDK
+
+- **Incoming configuration**
+
+ For Java EE apps, add the following code to the `<TelemetryModules>` tag in *ApplicationInsights.xml*:
+
+ ```xml
+    <Add type="com.microsoft.applicationinsights.web.extensibility.modules.WebRequestTrackingTelemetryModule">
+       <Param name="W3CEnabled" value="true"/>
+       <Param name="enableW3CBackCompat" value="true" />
+ </Add>
+ ```
+
+ For Spring Boot apps, add these properties:
+
+ - `azure.application-insights.web.enable-W3C=true`
+ - `azure.application-insights.web.enable-W3C-backcompat-mode=true`
+
+- **Outgoing configuration**
+
+ Add the following code to *AI-Agent.xml*:
+
+ ```xml
+ <Instrumentation>
+ <BuiltIn enabled="true">
+ <HTTP enabled="true" W3C="true" enableW3CBackCompat="true"/>
+ </BuiltIn>
+ </Instrumentation>
+ ```
+
+ > [!NOTE]
+ > Backward compatibility mode is enabled by default, and the `enableW3CBackCompat` parameter is optional. Use it only when you want to turn backward compatibility off.
+ >
+ > Ideally, you'll turn off this mode when all your services are updated to newer versions of SDKs that support the W3C protocol. We highly recommend that you move to these newer SDKs as soon as possible.
+
+It's important to make sure the incoming and outgoing configurations are exactly the same.
+
+### Enable W3C distributed tracing support for web apps
+
+This feature is in `Microsoft.ApplicationInsights.JavaScript`. It's disabled by default. To enable it, use the `distributedTracingMode` configuration setting. `AI_AND_W3C` is provided for backward compatibility with any legacy services instrumented by Application Insights.
+
+- **[npm-based setup](./javascript.md#npm-based-setup)**
+
+ Add the following configuration:
+ ```JavaScript
+ distributedTracingMode: DistributedTracingModes.W3C
+ ```
+
+- **[Snippet-based setup](./javascript.md#snippet-based-setup)**
+
+ Add the following configuration:
+ ```JavaScript
+ distributedTracingMode: 2 // DistributedTracingModes.W3C
+ ```
+> [!IMPORTANT]
+> To see all configurations required to enable correlation, see the [JavaScript correlation documentation](./javascript.md#enable-distributed-tracing).
+
+## Telemetry correlation in OpenCensus Python
+
+OpenCensus Python supports [W3C Trace-Context](https://w3c.github.io/trace-context/) without requiring extra configuration.
+
+For a reference, you can find the OpenCensus data model on [this GitHub page](https://github.com/census-instrumentation/opencensus-specs/tree/master/trace).
+
+### Incoming request correlation
+
+OpenCensus Python correlates W3C Trace-Context headers from incoming requests to the spans that are generated from the requests themselves. OpenCensus correlates automatically with integrations for these popular web application frameworks: Flask, Django, and Pyramid. You just need to populate the W3C Trace-Context headers with the [correct format](https://www.w3.org/TR/trace-context/#trace-context-http-headers-format) and send them with the request.
+
+Explore this sample Flask application. Install Flask, OpenCensus, and the extensions for Flask and Azure.
+
+```shell
+
+pip install flask opencensus opencensus-ext-flask opencensus-ext-azure
+
+```
+
+You need to add your Application Insights connection string to the environment variable.
+
+```shell
+APPLICATIONINSIGHTS_CONNECTION_STRING=<appinsights-connection-string>
+```
+
+**Sample Flask Application**
+
+```python
+from flask import Flask
+from opencensus.ext.azure.trace_exporter import AzureExporter
+from opencensus.ext.flask.flask_middleware import FlaskMiddleware
+from opencensus.trace.samplers import ProbabilitySampler
+
+app = Flask(__name__)
+middleware = FlaskMiddleware(
+ app,
+ exporter=AzureExporter(
+        connection_string='<appinsights-connection-string>',  # or set the environment variable APPLICATIONINSIGHTS_CONNECTION_STRING
+ ),
+ sampler=ProbabilitySampler(rate=1.0),
+)
+
+@app.route('/')
+def hello():
+ return 'Hello World!'
+
+if __name__ == '__main__':
+ app.run(host='localhost', port=8080, threaded=True)
+```
+
+This code runs a sample Flask application on your local machine, listening to port `8080`. To correlate trace context, you send a request to the endpoint. In this example, you can use a `curl` command:
+
+```
+curl --header "traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01" localhost:8080
+```
+
+By looking at the [Trace-Context header format](https://www.w3.org/TR/trace-context/#trace-context-http-headers-format), you can derive the following information:
+
+`version`: `00`
+
+`trace-id`: `4bf92f3577b34da6a3ce929d0e0e4736`
+
+`parent-id/span-id`: `00f067aa0ba902b7`
+
+`trace-flags`: `01`
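For illustration, the following sketch splits the `traceparent` value from the `curl` command above into the four fields listed.

```python
# Illustrative sketch: split the traceparent header from the curl example into its four fields.
traceparent = "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"

version, trace_id, parent_id, trace_flags = traceparent.split("-")
print(version)       # 00
print(trace_id)      # 4bf92f3577b34da6a3ce929d0e0e4736
print(parent_id)     # 00f067aa0ba902b7
print(trace_flags)   # 01
```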
+
+If you look at the request entry that was sent to Azure Monitor, you can see fields populated with the trace header information. You can find the data under **Logs (Analytics)** in the Azure Monitor Application Insights resource.
+
+![Screenshot that shows Request telemetry in Logs (Analytics).](./media/opencensus-python/0011-correlation.png)
+
+The `id` field is in the format `<trace-id>.<span-id>`, where `trace-id` is taken from the trace header that was passed in the request and `span-id` is a generated 8-byte array for this span.
+
+The `operation_ParentId` field is in the format `<trace-id>.<parent-id>`, where both `trace-id` and `parent-id` are taken from the trace header that was passed in the request.
+
+### Log correlation
+
+OpenCensus Python enables you to correlate logs by adding a trace ID, a span ID, and a sampling flag to log records. You add these attributes by installing OpenCensus [logging integration](https://pypi.org/project/opencensus-ext-logging/). The following attributes are added to Python `LogRecord` objects: `traceId`, `spanId`, and `traceSampled` (applicable only for loggers that are created after the integration).
+
+Install the OpenCensus logging integration:
+
+```console
+python -m pip install opencensus-ext-logging
+```
+
+**Sample application**
+
+```python
+import logging
+
+from opencensus.trace import config_integration
+from opencensus.trace.samplers import AlwaysOnSampler
+from opencensus.trace.tracer import Tracer
+
+config_integration.trace_integrations(['logging'])
+logging.basicConfig(format='%(asctime)s traceId=%(traceId)s spanId=%(spanId)s %(message)s')
+tracer = Tracer(sampler=AlwaysOnSampler())
+
+logger = logging.getLogger(__name__)
+logger.warning('Before the span')
+with tracer.span(name='hello'):
+ logger.warning('In the span')
+logger.warning('After the span')
+```
+When this code runs, the following prints in the console:
+```
+2019-10-17 11:25:59,382 traceId=c54cb1d4bbbec5864bf0917c64aeacdc spanId=0000000000000000 Before the span
+2019-10-17 11:25:59,384 traceId=c54cb1d4bbbec5864bf0917c64aeacdc spanId=70da28f5a4831014 In the span
+2019-10-17 11:25:59,385 traceId=c54cb1d4bbbec5864bf0917c64aeacdc spanId=0000000000000000 After the span
+```
+
+Notice that a `spanId` is present for the log message that's within the span. It's the same `spanId` that belongs to the span named `hello`.
+
+You can export the log data by using `AzureLogHandler`. For more information, see [Set up Azure Monitor for your Python application](./opencensus-python.md#logs).
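+
+For example, the following sketch (assuming the `opencensus-ext-azure` package is installed and you replace the connection string placeholder) attaches `AzureLogHandler` so that the correlated log records are also exported:
+
+```python
+import logging
+
+from opencensus.ext.azure.log_exporter import AzureLogHandler
+
+logger = logging.getLogger(__name__)
+# Export warning-level and above log records to Application Insights.
+logger.addHandler(AzureLogHandler(connection_string='<appinsights-connection-string>'))
+logger.warning('Exported to Application Insights')
+```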
+
+You can also pass trace information from one component to another for proper correlation. For example, consider a scenario with two components, `module_1` and `module_2`, where `module_1` calls functions in `module_2`. To get logs from both modules in a single trace, use the following approach:
+
+```python
+# module_1.py
+import logging
+
+from opencensus.trace import config_integration
+from opencensus.trace.samplers import AlwaysOnSampler
+from opencensus.trace.tracer import Tracer
+from module_2 import function_1
+
+config_integration.trace_integrations(["logging"])
+logging.basicConfig(
+ format="%(asctime)s traceId=%(traceId)s spanId=%(spanId)s %(message)s"
+)
+tracer = Tracer(sampler=AlwaysOnSampler())
+
+logger = logging.getLogger(__name__)
+logger.warning("Before the span")
+
+with tracer.span(name="hello"):
+ logger.warning("In the span")
+ function_1(logger, tracer)
+logger.warning("After the span")
+```
+
+```python
+# module_2.py
+import logging
+
+from opencensus.trace import config_integration
+from opencensus.trace.samplers import AlwaysOnSampler
+from opencensus.trace.tracer import Tracer
+
+config_integration.trace_integrations(["logging"])
+logging.basicConfig(
+ format="%(asctime)s traceId=%(traceId)s spanId=%(spanId)s %(message)s"
+)
+logger = logging.getLogger(__name__)
+tracer = Tracer(sampler=AlwaysOnSampler())
+
+def function_1(logger=logger, parent_tracer=None):
+ if parent_tracer is not None:
+ tracer = Tracer(
+ span_context=parent_tracer.span_context,
+ sampler=AlwaysOnSampler(),
+ )
+ else:
+ tracer = Tracer(sampler=AlwaysOnSampler())
+
+ with tracer.span("function_1"):
+        logger.warning("In function_1")
+
+```
+
+## Telemetry correlation in .NET
+
+Correlation is handled by default when you onboard an app. No special actions are required.
+
+* [Application Insights for ASP.NET Core applications](asp-net-core.md#application-insights-for-aspnet-core-applications)
+* [Configure Application Insights for your ASP.NET website](asp-net.md#configure-application-insights-for-your-aspnet-website)
+* [Application Insights for Worker Service applications (non-HTTP applications)](worker-service.md#application-insights-for-worker-service-applications-non-http-applications)
+
+The .NET runtime supports distributed tracing with the help of [Activity](https://github.com/dotnet/runtime/blob/master/src/libraries/System.Diagnostics.DiagnosticSource/src/ActivityUserGuide.md) and [DiagnosticSource](https://github.com/dotnet/runtime/blob/master/src/libraries/System.Diagnostics.DiagnosticSource/src/DiagnosticSourceUsersGuide.md).
+
+The Application Insights .NET SDK uses `DiagnosticSource` and `Activity` to collect and correlate telemetry.
+
+<a name="java-correlation"></a>
+## Telemetry correlation in Java
+
+[Java agent](./opentelemetry-enable.md?tabs=java) supports automatic correlation of telemetry. It automatically populates `operation_id` for all telemetry (like traces, exceptions, and custom events) issued within the scope of a request. It also propagates the correlation headers that were described earlier for service-to-service calls via HTTP, if the [Java SDK agent](deprecated-java-2x.md#monitor-dependencies-caught-exceptions-and-method-execution-times-in-java-web-apps) is configured.
+
+> [!NOTE]
+> Application Insights Java agent autocollects requests and dependencies for JMS, Kafka, Netty/Webflux, and more. For Java SDK, only calls made via Apache HttpClient are supported for the correlation feature. Automatic context propagation across messaging technologies like Kafka, RabbitMQ, and Azure Service Bus isn't supported in the SDK.
+
+To collect custom telemetry, you need to instrument the application with the Java 2.6 SDK.
+
+### Role names
+
+You might want to customize the way component names are displayed in [Application Map](../../azure-monitor/app/app-map.md). To do so, you can manually set `cloud_RoleName` by taking one of the following actions:
+
+- For Application Insights Java, set the cloud role name as follows:
+
+ ```json
+ {
+ "role": {
+ "name": "my cloud role name"
+ }
+ }
+ ```
+
+ You can also set the cloud role name by using the environment variable `APPLICATIONINSIGHTS_ROLE_NAME`.
+
+- With Application Insights Java SDK 2.5.0 and later, you can specify `cloud_RoleName`
+ by adding `<RoleName>` to your *ApplicationInsights.xml* file:
+
+ :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot that shows Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png":::
+
+ ```xml
+ <?xml version="1.0" encoding="utf-8"?>
+ <ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings" schemaVersion="2014-05-30">
+ <ConnectionString>InstrumentationKey=00000000-0000-0000-0000-000000000000</ConnectionString>
+    <RoleName>Your role name</RoleName>
+ ...
+ </ApplicationInsights>
+ ```
+
+- If you use Spring Boot with the Application Insights Spring Boot Starter, set your custom name for the application in the *application.properties* file:
+
+ `spring.application.name=<name-of-app>`
+
+You can also set the cloud role name via environment variable or system property. See [Configuring cloud role name](./java-standalone-config.md#cloud-role-name) for details.
+
+## Next steps
+
+- [Application map](./app-map.md)
+- Write [custom telemetry](../../azure-monitor/app/api-custom-events-metrics.md).
+- For advanced correlation scenarios in ASP.NET Core and ASP.NET, see [Track custom operations](custom-operations-tracking.md).
+- Learn more about [setting cloud_RoleName](./app-map.md#set-or-override-cloud-role-name) for other SDKs.
+- Onboard all components of your microservice on Application Insights. Check out the [supported platforms](./app-insights-overview.md#supported-languages).
+- See the [data model](./data-model-complete.md) for Application Insights types.
+- Learn how to [extend and filter telemetry](./api-filtering-sampling.md).
+- Review the [Application Insights config reference](configuration-with-applicationinsights-config.md).
azure-monitor Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing.md
- Title: Distributed tracing in Azure Application Insights | Microsoft Docs
-description: This article provides information about Microsoft's support for distributed tracing through our partnership in the OpenCensus project.
-- Previously updated : 09/17/2018---
-# What is distributed tracing?
-
-The advent of modern cloud and [microservices](https://azure.com/microservices) architectures has given rise to simple, independently deployable services that can help reduce costs while increasing availability and throughput. These movements have made individual services easier to understand. But they've also made overall systems more difficult to reason about and debug.
-
-In monolithic architectures, we've gotten used to debugging with call stacks. Call stacks are brilliant tools for showing the flow of execution (Method A called Method B, which called Method C), along with details and parameters about each of those calls. This technique is great for monoliths or services running on a single process. But how do we debug when the call is across a process boundary, not simply a reference on the local stack?
-
-That's where distributed tracing comes in.
-
-Distributed tracing is the equivalent of call stacks for modern cloud and microservices architectures, with the addition of a simplistic performance profiler thrown in. In Azure Monitor, we provide two experiences for consuming distributed trace data. The first is our [transaction diagnostics](./transaction-diagnostics.md) view, which is like a call stack with a time dimension added in. The transaction diagnostics view provides visibility into one single transaction/request. It's helpful for finding the root cause of reliability issues and performance bottlenecks on a per-request basis.
-
-Azure Monitor also offers an [application map](./app-map.md) view, which aggregates many transactions to show a topological view of how the systems interact. The map view also shows what the average performance and error rates are.
-
-## Enable distributed tracing
-
-Enabling distributed tracing across the services in an application is as simple as adding the proper agent, SDK, or library to each service, based on the language the service was implemented in.
-
-## Enable via Application Insights through auto-instrumentation or SDKs
-
-The Application Insights agents and SDKs for .NET, .NET Core, Java, Node.js, and JavaScript all support distributed tracing natively. Instructions for installing and configuring each Application Insights SDK are available for:
-
-* [.NET](asp-net.md)
-* [.NET Core](asp-net-core.md)
-* [Java](./opentelemetry-enable.md?tabs=java)
-* [Node.js](../app/nodejs.md)
-* [JavaScript](./javascript.md#enable-distributed-tracing)
-* [Python](opencensus-python.md)
-
-With the proper Application Insights SDK installed and configured, tracing information is automatically collected for popular frameworks, libraries, and technologies by SDK dependency auto-collectors. The full list of supported technologies is available in the [Dependency auto-collection documentation](asp-net-dependencies.md#dependency-auto-collection).
-
- Any technology also can be tracked manually with a call to [TrackDependency](./api-custom-events-metrics.md) on the [TelemetryClient](./api-custom-events-metrics.md).
-
-## Enable via OpenTelemetry
-
-Application Insights now supports distributed tracing through [OpenTelemetry](https://opentelemetry.io/). OpenTelemetry provides a vendor-neutral instrumentation to send traces, metrics, and logs to Application Insights. Initially, the OpenTelemetry community took on distributed tracing. Metrics and logs are still in progress.
-
-A complete observability story includes all three pillars, but currently our [Azure Monitor OpenTelemetry-based exporter preview offerings for .NET, Python, and JavaScript](opentelemetry-enable.md) only include distributed tracing. Our Java OpenTelemetry-based Azure Monitor offering is generally available and fully supported.
-
-The following pages consist of language-by-language guidance to enable and configure Microsoft's OpenTelemetry-based offerings. Importantly, we share the available functionality and limitations of each offering so you can determine whether OpenTelemetry is right for your project.
-
-* [.NET](opentelemetry-enable.md?tabs=net)
-* [Java](opentelemetry-enable.md?tabs=java)
-* [Node.js](opentelemetry-enable.md?tabs=nodejs)
-* [Python](opentelemetry-enable.md?tabs=python)
-
-## Enable via OpenCensus
-
-In addition to the Application Insights SDKs, Application Insights also supports distributed tracing through [OpenCensus](https://opencensus.io/). OpenCensus is an open-source, vendor-agnostic, single distribution of libraries to provide metrics collection and distributed tracing for services. It also enables the open-source community to enable distributed tracing with popular technologies like Redis, Memcached, or MongoDB. [Microsoft collaborates on OpenCensus with several other monitoring and cloud partners](https://open.microsoft.com/2018/06/13/microsoft-joins-the-opencensus-project/).
-
-For more information on OpenCensus for Python, see [Set up Azure Monitor for your Python application](opencensus-python.md).
-
-The OpenCensus website maintains API reference documentation for [Python](https://opencensus.io/api/python/trace/usage.html), [Go](https://godoc.org/go.opencensus.io), and various guides for using OpenCensus.
-
-## Next steps
-
-* [OpenCensus Python usage guide](https://opencensus.io/api/python/trace/usage.html)
-* [Application map](./app-map.md)
-* [End-to-end performance monitoring](../app/tutorial-performance.md)
azure-monitor Javascript Feature Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md
Title: Feature extensions for Application Insights JavaScript SDK (Click Analytics)
-description: Learn how to install and use JavaScript feature extensions (Click Analytics) for Application Insights JavaScript SDK.
+description: Learn how to install and use JavaScript feature extensions (Click Analytics) for the Application Insights JavaScript SDK.
ibiza
ms.devlang: javascript
-# Feature extensions for Application Insights JavaScript SDK (Click Analytics)
+# Feature extensions for the Application Insights JavaScript SDK (Click Analytics)
-App Insights JavaScript SDK feature extensions are extra features that can be added to the Application Insights JavaScript SDK to enhance its functionality.
+Application Insights JavaScript SDK feature extensions are extra features that can be added to the Application Insights JavaScript SDK to enhance its functionality.
-In this article, we cover the Click Analytics plugin that automatically tracks click events on web pages and uses data-* attributes on HTML elements to populate event telemetry.
+In this article, we cover the Click Analytics plug-in that automatically tracks click events on webpages and uses `data-*` attributes on HTML elements to populate event telemetry.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-## Getting started
+## Get started
-Users can set up the Click Analytics Auto-collection plugin via npm.
+Users can set up the Click Analytics Autocollection plug-in via npm.
### npm setup
-Install npm package:
+Install the npm package:
```bash npm install --save @microsoft/applicationinsights-clickanalytics-js @microsoft/applicationinsights-web
const appInsights = new ApplicationInsights({ config: configObj });
appInsights.loadAppInsights(); ```
-## Snippet Setup (ignore if using npm setup)
+## Snippet setup
+
+Ignore this setup if you use the npm setup.
```html <script type="text/javascript" src="https://js.monitor.azure.com/scripts/b/ext/ai.clck.2.6.2.min.js"></script>
appInsights.loadAppInsights();
useDefaultContentNameOrId: true } }
- // Application Insights Configuration
+ // Application Insights configuration
var configObj = { instrumentationKey: "YOUR INSTRUMENTATION KEY", extensions: [
appInsights.loadAppInsights();
</script> ```
-## How to effectively use the plugin
+## Use the plug-in
1. Telemetry data generated from the click events are stored as `customEvents` in the Application Insights section of the Azure portal.
-2. The `name` of the customEvent is populated based on the following rules:
- 1. The `id` provided in the `data-*-id` is used as the customEvent name. For example, if the clicked HTML element has the attribute "data-sample-id"="button1", then "button1" is the customEvent name.
- 2. If no such attribute exists and if the `useDefaultContentNameOrId` is set to `true` in the configuration, then the clicked element's HTML attribute `id` or content name of the element is used as the customEvent name. If both `id` and content name are present, precedence is given to `id`.
- 3. If `useDefaultContentNameOrId` is false, then the customEvent name is "not_specified".
+1. The `name` of the `customEvent` is populated based on the following rules:
+ 1. The `id` provided in the `data-*-id` is used as the `customEvent` name. For example, if the clicked HTML element has the attribute `"data-sample-id"="button1"`, then `"button1"` is the `customEvent` name.
+ 1. If no such attribute exists and if the `useDefaultContentNameOrId` is set to `true` in the configuration, the clicked element's HTML attribute `id` or content name of the element is used as the `customEvent` name. If both `id` and the content name are present, precedence is given to `id`.
+ 1. If `useDefaultContentNameOrId` is `false`, the `customEvent` name is `"not_specified"`.
> [!TIP]
- > We recommend settings `useDefaultContentNameOrId` to true for generating meaningful data.
-3. `parentDataTag` does two things:
- 1. If this tag is present, the plugin fetches the `data-*` attributes and values from all the parent HTML elements of the clicked element.
- 2. To improve efficiency, the plugin uses this tag as a flag, when encountered it stops itself from further processing the DOM (Document Object Model) upwards.
+ > We recommend setting `useDefaultContentNameOrId` to `true` for generating meaningful data.
+1. The tag `parentDataTag` does two things:
+ 1. If this tag is present, the plug-in fetches the `data-*` attributes and values from all the parent HTML elements of the clicked element.
+ 1. To improve efficiency, the plug-in uses this tag as a flag. When encountered, it stops itself from further processing the Document Object Model (DOM) upward.
> [!CAUTION]
- > Once `parentDataTag` is used, the SDK will begin looking for parent tags across your entire application and not just the HTML element where you used it.
-4. `customDataPrefix` provided by the user should always start with `data-`, for example `data-sample-`. In HTML, the `data-*` global attributes are called custom data attributes that allow proprietary information to be exchanged between the HTML and its DOM representation by scripts. Older browsers (Internet Explorer, Safari) drop attributes that it doesn't understand, unless they start with `data-`.
+ > After `parentDataTag` is used, the SDK begins looking for parent tags across your entire application and not just the HTML element where you used it.
+1. The `customDataPrefix` provided by the user should always start with `data-`. An example is `data-sample-`. In HTML, the `data-*` global attributes are called custom data attributes that allow proprietary information to be exchanged between the HTML and its DOM representation by scripts. Older browsers like Internet Explorer and Safari drop attributes they don't understand, unless they start with `data-`.
- The `*` in `data-*` may be replaced by any name following the [production rule of XML names](https://www.w3.org/TR/REC-xml/#NT-Name) with the following restrictions:
- - The name must not start with "xml", whatever case is used for these letters.
- - The name must not contain any semicolon (U+003A).
+ The asterisk (`*`) in `data-*` can be replaced by any name following the [production rule of XML names](https://www.w3.org/TR/REC-xml/#NT-Name) with the following restrictions:
+ - The name must not start with "xml," whatever case is used for the letters.
+    - The name must not contain a colon (U+003A).
- The name must not contain capital letters.
-## What data does the plugin collect
+## What data does the plug-in collect?
-The following are some of the key properties captured by default when the plugin is enabled:
+The following key properties are captured by default when the plug-in is enabled.
-### Custom Event Properties
+### Custom event properties
| Name | Description | Sample | | | |--|
-| Name | The `name` of the customEvent. More info on how name gets populated is shown [here](#how-to-effectively-use-the-plugin).| About |
+| Name | The name of the custom event. More information on how a name gets populated is shown in the [preceding section](#use-the-plug-in).| About |
| itemType | Type of event. | customEvent |
-|sdkVersion | Version of Application Insights SDK along with click plugin|JavaScript:2.6.2_ClickPlugin2.6.2|
+|sdkVersion | Version of Application Insights SDK along with click plug-in.|JavaScript:2.6.2_ClickPlugin2.6.2|
-### Custom Dimensions
+### Custom dimensions
| Name | Description | Sample | | | |--|
-| actionType | Action type that caused the click event. Can be left or right click. | CL |
+| actionType | Action type that caused the click event. It can be a left or right click. | CL |
| baseTypeSource | Base Type source of the custom event. | ClickEvent | | clickCoordinates | Coordinates where the click event is triggered. | 659X47 | | content | Placeholder to store extra `data-*` attributes and values. | [{sample1:value1, sample2:value2}] | | pageName | Title of the page where the click event is triggered. | Sample Title |
-| parentId | ID or name of the parent element | navbarContainer |
+| parentId | ID or name of the parent element. | navbarContainer |
-### Custom Measurements
+### Custom measurements
| Name | Description | Sample | | | |--|
-| timeToAction | Time taken in milliseconds for the user to click the element since initial page load | 87407 |
+| timeToAction | Time taken in milliseconds for the user to click the element since the initial page load. | 87407 |
## Configuration
The following are some of the key properties captured by default when the plugin
| dataTags | [ICustomDataTags](#icustomdatatags)| Null | Custom Data Tags provided to override default tags used to capture click data. | | urlCollectHash | Boolean | False | Enables the logging of values after a "#" character of the URL. | | urlCollectQuery | Boolean | False | Enables the logging of the query string of the URL. |
-| behaviorValidator | Function | Null | Callback function to use for the `data-*-bhvr` value validation. For more information, go to [behaviorValidator section](#behaviorvalidator).|
-| defaultRightClickBhvr | String (or) number | '' | Default Behavior value when Right Click event has occurred. This value is overridden if the element has the `data-*-bhvr` attribute. |
+| behaviorValidator | Function | Null | Callback function to use for the `data-*-bhvr` value validation. For more information, see the [behaviorValidator section](#behaviorvalidator).|
+| defaultRightClickBhvr | String (or) number | '' | Default behavior value when a right-click event has occurred. This value is overridden if the element has the `data-*-bhvr` attribute. |
| dropInvalidEvents | Boolean | False | Flag to drop events that don't have useful click data. | ### IValueCallback | Name | Type | Default | Description | | | -- | - | |
-| pageName | Function | Null | Function to override the default pageName capturing behavior. |
-| pageActionPageTags | Function | Null | A callback function to augment the default pageTags collected during pageAction event. |
-| contentName | Function | Null | A callback function to populate customized contentName. |
+| pageName | Function | Null | Function to override the default `pageName` capturing behavior. |
+| pageActionPageTags | Function | Null | A callback function to augment the default `pageTags` collected during a `pageAction` event. |
+| contentName | Function | Null | A callback function to populate customized `contentName`. |
### ICustomDataTags
-| Name | Type | Default | Default Tag to Use in HTML | Description |
+| Name | Type | Default | Default tag to use in HTML | Description |
|||--|-|-|
-| useDefaultContentNameOrId | Boolean | False | N/A |Collects standard HTML attribute for contentName when a particular element isn't tagged with default customDataPrefix or when customDataPrefix isn't provided by user. |
+| useDefaultContentNameOrId | Boolean | False | N/A |Collects standard HTML attribute for `contentName` when a particular element isn't tagged with default `customDataPrefix` or when `customDataPrefix` isn't provided by user. |
| customDataPrefix | String | `data-` | `data-*`| Automatic capture content name and value of elements that are tagged with provided prefix. For example, `data-*-id`, `data-<yourcustomattribute>` can be used in the HTML tags. |
-| aiBlobAttributeTag | String | `ai-blob` | `data-ai-blob`| Plugin supports a JSON blob attribute instead of individual `data-*` attributes. |
-| metaDataPrefix | String | Null | N/A | Automatic capture HTML Head's meta element name and content with provided prefix when capture. For example, `custom-` can be used in the HTML meta tag. |
-| captureAllMetaDataContent | Boolean | False | N/A | Automatic capture all HTML Head's meta element names and content. Default is false. If enabled it overrides provided metaDataPrefix. |
+| aiBlobAttributeTag | String | `ai-blob` | `data-ai-blob`| Plug-in supports a JSON blob attribute instead of individual `data-*` attributes. |
+| metaDataPrefix | String | Null | N/A | Automatically captures the HTML head's meta element name and content with the provided prefix. For example, `custom-` can be used in the HTML meta tag. |
+| captureAllMetaDataContent | Boolean | False | N/A | Automatically captures all HTML head meta element names and content. Default is false. If enabled, it overrides the provided `metaDataPrefix`. |
| parentDataTag | String | Null | N/A | Stops traversing up the DOM to capture content name and value of elements when encountered with this tag. For example, `data-<yourparentDataTag>` can be used in the HTML tags.|
-| dntDataTag | String | `ai-dnt` | `data-ai-dnt`| HTML elements with this attribute are ignored by the plugin for capturing telemetry data.|
+| dntDataTag | String | `ai-dnt` | `data-ai-dnt`| HTML elements with this attribute are ignored by the plug-in for capturing telemetry data.|
### behaviorValidator
-The behaviorValidator functions automatically check that tagged behaviors in code conform to a pre-defined list. It ensures tagged behaviors are consistent with your enterprise's established taxonomy. It isn't required or expected that most Azure Monitor customers use these functions, but they're available for advanced scenarios. There are three different behaviorValidator callback functions exposed as part of this extension. However, users can use their own callback functions if the exposed functions don't solve your requirement. The intent is to bring your own behaviors data structure, the plugin uses this validator function while extracting the behaviors from the data tags.
+The `behaviorValidator` functions automatically check that tagged behaviors in code conform to a predefined list. The functions ensure that tagged behaviors are consistent with your enterprise's established taxonomy. It isn't required or expected that most Azure Monitor customers use these functions, but they're available for advanced scenarios.
+
+Three different `behaviorValidator` callback functions are exposed as part of this extension. You can also use your own callback functions if the exposed functions don't solve your requirement. The intent is to bring your own behavior's data structure. The plug-in uses this validator function while extracting the behaviors from the data tags.
| Name | Description | | - | --|
-| BehaviorValueValidator | Use this callback function if your behaviors data structure is an array of strings.|
-| BehaviorMapValidator | Use this callback function if your behaviors data structure is a dictionary. |
-| BehaviorEnumValidator | Use this callback function if your behaviors data structure is an Enum. |
+| BehaviorValueValidator | Use this callback function if your behavior's data structure is an array of strings.|
+| BehaviorMapValidator | Use this callback function if your behavior's data structure is a dictionary. |
+| BehaviorEnumValidator | Use this callback function if your behavior's data structure is an Enum. |
#### Sample usage with behaviorValidator
var behaviorMap = {
SORT: 6, // Sorting content EXPAND: 7, // Expanding content or content container REDUCE: 8, // Sorting content
- CONTEXTMENU: 9, // Context Menu
+ CONTEXTMENU: 9, // Context menu
TAB: 10, // Tab control COPY: 11, // Copy the contents of a page
- EXPERIMENTATION: 12, // Used to identify a third party experimentation event
+ EXPERIMENTATION: 12, // Used to identify a third-party experimentation event
PRINT: 13, // User printed page
- SHOW: 14, // Displaying an overlay
- HIDE: 15, // Hiding an overlay
- MAXIMIZE: 16, // Maximizing an overlay
+ SHOW: 14, // Displaying an overlay
+ HIDE: 15, // Hiding an overlay
+ MAXIMIZE: 16, // Maximizing an overlay
MINIMIZE: 17, // Minimizing an overlay
- BACKBUTTON: 18, // Clicking the back button
+ BACKBUTTON: 18, // Clicking the back button
/////////////////////////////////////////////////////////////////////////////////////////////////// // Scenario Process [20-39]
var behaviorMap = {
// Feedback [140-159] /////////////////////////////////////////////////////////////////////////////////////////////////// VOTE: 140, // Rating content or voting for content
- SURVEYCHECKPOINT: 145, // reaching the survey page/form
+ SURVEYCHECKPOINT: 145, // Reaching the survey page/form
/////////////////////////////////////////////////////////////////////////////////////////////////// // Registration, Contact [160-179]
var behaviorMap = {
/////////////////////////////////////////////////////////////////////////////////////////////////// // Trial [200-209] ///////////////////////////////////////////////////////////////////////////////////////////////////
- TRIALSIGNUP: 200, // Signing-up for a trial
+ TRIALSIGNUP: 200, // Signing up for a trial
TRIALINITIATE: 201, // Initiating a trial /////////////////////////////////////////////////////////////////////////////////////////////////// // Signup [210-219] ///////////////////////////////////////////////////////////////////////////////////////////////////
- SIGNUP: 210, // Signing-up for a notification or service
- FREESIGNUP: 211, // Signing-up for a free service
+ SIGNUP: 210, // Signing up for a notification or service
+ FREESIGNUP: 211, // Signing up for a free service
///////////////////////////////////////////////////////////////////////////////////////////////////
- // Referals [220-229]
+ // Referrals [220-229]
/////////////////////////////////////////////////////////////////////////////////////////////////// PARTNERREFERRAL: 220, // Navigating to a partner's web property /////////////////////////////////////////////////////////////////////////////////////////////////// // Intents [230-239] ///////////////////////////////////////////////////////////////////////////////////////////////////
- LEARNLOWFUNNEL: 230, // Engaging in learning behavior on a commerce page (ex. "Learn more click")
- LEARNHIGHFUNNEL: 231, // Engaging in learning behavior on a non-commerce page (ex. "Learn more click")
+ LEARNLOWFUNNEL: 230, // Engaging in learning behavior on a commerce page (for example, "Learn more click")
+ LEARNHIGHFUNNEL: 231, // Engaging in learning behavior on a non-commerce page (for example, "Learn more click")
SHOPPINGINTENT: 232, // Shopping behavior prior to landing on a commerce page ///////////////////////////////////////////////////////////////////////////////////////////////////
var behaviorMap = {
/////////////////////////////////////////////////////////////////////////////////////////////////// VIDEOSTART: 240, // Initiating a video VIDEOPAUSE: 241, // Pausing a video
- VIDEOCONTINUE: 242, // Pausing or resuming a video.
- VIDEOCHECKPOINT: 243, // Capturing predetermined video percentage complete.
- VIDEOJUMP: 244, // Jumping to a new video location.
+ VIDEOCONTINUE: 242, // Pausing or resuming a video
+ VIDEOCHECKPOINT: 243, // Capturing predetermined video percentage complete
+ VIDEOJUMP: 244, // Jumping to a new video location
VIDEOCOMPLETE: 245, // Completing a video (or % proxy) VIDEOBUFFERING: 246, // Capturing a video buffer event VIDEOERROR: 247, // Capturing a video error
var behaviorMap = {
VIDEOUNFULLSCREEN: 251, // Making a video return from full screen to original size VIDEOREPLAY: 252, // Making a video replay VIDEOPLAYERLOAD: 253, // Loading the video player
- VIDEOPLAYERCLICK: 254, // Click on a button within the interactive player
- VIDEOVOLUMECONTROL: 255, // Click on video volume control
+ VIDEOPLAYERCLICK: 254, // Click on a button within the interactive player
+ VIDEOVOLUMECONTROL: 255, // Click on video volume control
VIDEOAUDIOTRACKCONTROL: 256, // Click on audio control within a video
- VIDEOCLOSEDCAPTIONCONTROL: 257, // Click on the closed caption control
- VIDEOCLOSEDCAPTIONSTYLE: 258, // Click to change closed caption style
- VIDEORESOLUTIONCONTROL: 259, // Click to change resolution
+ VIDEOCLOSEDCAPTIONCONTROL: 257, // Click on the closed-caption control
+ VIDEOCLOSEDCAPTIONSTYLE: 258, // Click to change closed-caption style
+ VIDEORESOLUTIONCONTROL: 259, // Click to change resolution
/////////////////////////////////////////////////////////////////////////////////////////////////// // Advertisement Engagement [280-299]
var behaviorMap = {
ADSTART: 285, // Ad start ADCOMPLETE: 286, // Ad complete ADSKIP: 287, // Ad skipped
- ADTIMEOUT: 288, // Ad timed-out
+ ADTIMEOUT: 288, // Ad timed out
OTHER: 300 // Other };
var appInsights = new Microsoft.ApplicationInsights.ApplicationInsights({
appInsights.loadAppInsights(); ```
-## Enable Correlation
+## Enable correlation
Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
-JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. To enable correlation, reference [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing).
+JavaScript correlation is turned off by default to minimize the telemetry we send by default. To enable correlation, see the [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing).
## Sample app
-[Simple web app with Click Analytics Auto-collection Plugin enabled](https://go.microsoft.com/fwlink/?linkid=2152871).
+[Simple web app with the Click Analytics Autocollection Plug-in enabled](https://go.microsoft.com/fwlink/?linkid=2152871)
## Next steps -- Check out the [documentation on utilizing HEART Workbook](usage-heart.md) for expanded product analytics.-- Check out the [GitHub Repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [npm Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Auto-Collection Plugin.-- Use [Events Analysis in Usage Experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions.-- Find click data under content field within customDimensions attribute in CustomEvents table in [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query). For more information, see [Sample App](https://go.microsoft.com/fwlink/?linkid=2152871).-- Build a [Workbook](../visualize/workbooks-overview.md) or [export to Power BI](../logs/log-powerbi.md) to create custom visualizations of click data.
+- See the [documentation on utilizing HEART workbook](usage-heart.md) for expanded product analytics.
+- See the [GitHub repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [npm Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Autocollection Plug-in.
+- Use [Events Analysis in the Usage experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions.
+- Find click data under the content field within the `customDimensions` attribute in the `CustomEvents` table in [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query). For more information, see a [sample app](https://go.microsoft.com/fwlink/?linkid=2152871).
+- Build a [workbook](../visualize/workbooks-overview.md) or [export to Power BI](../logs/log-powerbi.md) to create custom visualizations of click data.
azure-monitor Opencensus Python Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-dependency.md
conn.close()
Track your outgoing Django requests with the OpenCensus `django` integration. > [!NOTE]
-> The only outgoing Django requests that are tracked are calls made to a database. For requests made to the Django application, see [incoming requests](./opencensus-python-request.md#tracking-django-applications).
+> The only outgoing Django requests that are tracked are calls made to a database. For requests made to the Django application, see [incoming requests](./opencensus-python-request.md#track-django-applications).
Download and install `opencensus-ext-django` from [PyPI](https://pypi.org/project/opencensus-ext-django/) and add the following line to the `MIDDLEWARE` section in the Django `settings.py` file.
azure-monitor Opencensus Python Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python-request.md
Title: Incoming Request Tracking in Azure Application Insights with OpenCensus Python | Microsoft Docs
+ Title: Incoming request tracking in Application Insights with OpenCensus Python | Microsoft Docs
description: Monitor request calls for your Python apps via OpenCensus Python. Last updated 03/22/2023
# Track incoming requests with OpenCensus Python
-OpenCensus Python and its integrations collect incoming request data. Track incoming request data sent to your web applications built on top of the popular web frameworks `django`, `flask` and `pyramid`. Application Insights receives the data as `requests` telemetry
+OpenCensus Python and its integrations collect incoming request data. You can track incoming request data sent to your web applications built on top of the popular web frameworks Django, Flask, and Pyramid. Application Insights receives the data as `requests` telemetry.
-First, instrument your Python application with latest [OpenCensus Python SDK](./opencensus-python.md).
+First, instrument your Python application with the latest [OpenCensus Python SDK](./opencensus-python.md).
-## Tracking Django applications
+## Track Django applications
-1. Download and install `opencensus-ext-django` from [PyPI](https://pypi.org/project/opencensus-ext-django/) and instrument your application with the `django` middleware. Incoming requests sent to your `django` application will be tracked.
+1. Download and install `opencensus-ext-django` from [PyPI](https://pypi.org/project/opencensus-ext-django/). Instrument your application with the `django` middleware. Incoming requests sent to your Django application are tracked.
-2. Include `opencensus.ext.django.middleware.OpencensusMiddleware` in your `settings.py` file under `MIDDLEWARE`.
+1. Include `opencensus.ext.django.middleware.OpencensusMiddleware` in your `settings.py` file under `MIDDLEWARE`.
```python MIDDLEWARE = (
First, instrument your Python application with latest [OpenCensus Python SDK](./
) ```
-3. Make sure AzureExporter is configured properly in your `settings.py` under `OPENCENSUS`. For requests from urls that you don't wish to track, add them to `EXCLUDELIST_PATHS`.
-
+1. Make sure AzureExporter is configured properly in your `settings.py` under `OPENCENSUS`. For requests from URLs that you don't want to track, add them to `EXCLUDELIST_PATHS`.
```python OPENCENSUS = {
First, instrument your Python application with latest [OpenCensus Python SDK](./
} ```
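+
+As an illustration, a typical `OPENCENSUS` section in `settings.py` looks something like the following sketch. The sampler rate, connection-string placeholder, and excluded URL are example values, not required settings.
+
+```python
+OPENCENSUS = {
+    'TRACE': {
+        'SAMPLER': 'opencensus.trace.samplers.ProbabilitySampler(rate=1)',
+        'EXPORTER': '''opencensus.ext.azure.trace_exporter.AzureExporter(
+            connection_string="<appinsights-connection-string>",
+        )''',
+        'EXCLUDELIST_PATHS': ['https://example.com'],  # URLs you don't want to track
+    }
+}
+```
+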
-You can find a Django sample application in the sample Azure Monitor OpenCensus Python samples repository located [here](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor/django_sample).
-## Tracking Flask applications
+You can find a Django sample application in the [Azure Monitor OpenCensus Python samples repository](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor/django_sample).
+
+## Track Flask applications
-1. Download and install `opencensus-ext-flask` from [PyPI](https://pypi.org/project/opencensus-ext-flask/) and instrument your application with the `flask` middleware. Incoming requests sent to your `flask` application will be tracked.
+1. Download and install `opencensus-ext-flask` from [PyPI](https://pypi.org/project/opencensus-ext-flask/). Instrument your application with the `flask` middleware. Incoming requests sent to your Flask application are tracked.
```python
You can find a Django sample application in the sample Azure Monitor OpenCensus
```
-2. You can also configure your `flask` application through `app.config`. For requests from urls that you don't wish to track, add them to `EXCLUDELIST_PATHS`.
+1. You can also configure your `flask` application through `app.config`. For requests from URLs that you don't want to track, add them to `EXCLUDELIST_PATHS`.
```python app.config['OPENCENSUS'] = {
You can find a Django sample application in the sample Azure Monitor OpenCensus
``` > [!NOTE]
- > To run Flask under uWSGI in a Docker environment, you must first add `lazy-apps = true` to the uWSGI configuration file (uwsgi.ini). For more information, see the [issue description](https://github.com/census-instrumentation/opencensus-python/issues/660).
-
-You can find a Flask sample application that tracks requests in the Azure Monitor OpenCensus Python samples repository located [here](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor/flask_sample).
+ > To run Flask under uWSGI in a Docker environment, you must first add `lazy-apps = true` to the uWSGI configuration file (uwsgi.ini). For more information, see the [issue description](https://github.com/census-instrumentation/opencensus-python/issues/660).
-## Tracking Pyramid applications
+You can find a Flask sample application that tracks requests in the [Azure Monitor OpenCensus Python samples repository](https://github.com/Azure-Samples/azure-monitor-opencensus-python/tree/master/azure_monitor/flask_sample).
-1. Download and install `opencensus-ext-django` from [PyPI](https://pypi.org/project/opencensus-ext-pyramid/) and instrument your application with the `pyramid` tween. Incoming requests sent to your `pyramid` application will be tracked.
+## Track Pyramid applications
+
+1. Download and install `opencensus-ext-pyramid` from [PyPI](https://pypi.org/project/opencensus-ext-pyramid/). Instrument your application with the `pyramid` tween. Incoming requests sent to your Pyramid application are tracked.
```python def main(global_config, **settings):
You can find a Flask sample application that tracks requests in the Azure Monito
'.pyramid_middleware.OpenCensusTweenFactory') ```
-2. You can configure your `pyramid` tween directly in the code. For requests from urls that you don't wish to track, add them to `EXCLUDELIST_PATHS`.
+1. You can configure your `pyramid` tween directly in the code. For requests from URLs that you don't want to track, add them to `EXCLUDELIST_PATHS`.
```python settings = {
You can find a Flask sample application that tracks requests in the Azure Monito
config = Configurator(settings=settings) ```
-## Tracking FastAPI applications
+## Track FastAPI applications
-OpenCensus doesn't have an extension for FastAPI. To write your own FastAPI middleware, complete the following steps:
+OpenCensus doesn't have an extension for FastAPI. To write your own FastAPI middleware:
-1. The following dependencies are required:
+1. The following dependencies are required:
- [fastapi](https://pypi.org/project/fastapi/) - [uvicorn](https://pypi.org/project/uvicorn/)
-
+ In a production setting, we recommend that you deploy [uvicorn with gunicorn](https://www.uvicorn.org/deployment/#gunicorn).
-2. Add [FastAPI middleware](https://fastapi.tiangolo.com/tutorial/middleware/). Make sure that you set the span kind server: `span.span_kind = SpanKind.SERVER`.
+1. Add [FastAPI middleware](https://fastapi.tiangolo.com/tutorial/middleware/). Make sure that you set the span kind server: `span.span_kind = SpanKind.SERVER`.
-3. Run your application. Calls made to your FastAPI application should be automatically tracked and telemetry should be logged directly to Azure Monitor.
+1. Run your application. Calls made to your FastAPI application should be automatically tracked. Telemetry should be logged directly to Azure Monitor.
```python # Opencensus imports
OpenCensus doesn't have an extension for FastAPI. To write your own FastAPI midd
* [Application Map](./app-map.md) * [Availability](./availability-overview.md) * [Search](./diagnostic-search.md)
-* [Log (Analytics) query](../logs/log-query-overview.md)
-* [Transaction diagnostics](./transaction-diagnostics.md)
-
+* [Log Analytics query](../logs/log-query-overview.md)
+* [Transaction diagnostics](./transaction-diagnostics.md)
azure-monitor Resource Manager App Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resource-manager-app-resource.md
Title: Resource Manager template samples for Application Insights Resources
+ Title: Resource Manager template samples for Application Insights resources
description: Sample Azure Resource Manager templates to deploy Application Insights resources in Azure Monitor. Last updated 11/14/2022
resource component 'Microsoft.Insights/components@2020-02-02' = {
## Next steps
-* [Get other sample templates for Azure Monitor](../resource-manager-samples.md).
-* [Learn more about classic Application Insights resources](/previous-versions/azure/azure-monitor/app/create-new-resource).
-* [Learn more about workspace-based Application Insights resources](../app/create-workspace-resource.md).
+* Get other [sample templates for Azure Monitor](../resource-manager-samples.md).
+* Learn more about [classic Application Insights resources](/previous-versions/azure/azure-monitor/app/create-new-resource).
+* Learn more about [workspace-based Application Insights resources](../app/create-workspace-resource.md).
azure-monitor Statsbeat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/statsbeat.md
Title: Statsbeat in Azure Application Insights | Microsoft Docs
+ Title: Statsbeat in Application Insights | Microsoft Docs
description: Statistics about Application Insights SDKs and Auto-Instrumentation Last updated 08/24/2022
ms.reviwer: heya
-# Statsbeat in Azure Application Insights
+# Statsbeat in Application Insights
-Statsbeat collects essential and non-essential [custom metric](../essentials/metrics-custom-overview.md) about Application Insights SDKs and auto-instrumentation. Statsbeat serves three benefits for Azure Monitor Application Insights customers:
-- Service Health and Reliability (outside-in monitoring of connectivity to ingestion endpoint)-- Support Diagnostics (self-help insights and CSS insights)-- Product Improvement (insights for design optimizations)
+Statsbeat collects essential and nonessential [custom metrics](../essentials/metrics-custom-overview.md) about Application Insights SDKs and autoinstrumentation. Statsbeat serves three benefits for Azure Monitor Application Insights customers:
+- Service health and reliability (outside-in monitoring of connectivity to ingestion endpoint)
+- Support diagnostics (self-help insights and CSS insights)
+- Product improvement (insights for design optimizations)
-Statsbeat data is stored in a Microsoft data store. It doesn't impact customers' overall monitoring volume and cost.
+Statsbeat data is stored in a Microsoft data store. It doesn't affect customers' overall monitoring volume and cost.
-Statsbeat doesn't support [Azure Private Link](../../automation/how-to/private-link-security.md).
+Statsbeat doesn't support [Azure Private Link](../../automation/how-to/private-link-security.md).
## What data does Statsbeat collect?
-Statsbeat collects essential and non-essential metrics.
+Statsbeat collects essential and nonessential metrics.
## Supported languages | C# | Java | JavaScript | Node.js | Python | ||--||--|--|
-| Currently Not supported | Supported | Currently Not supported | Supported | Supported |
+| Currently not supported | Supported | Currently not supported | Supported | Supported |
-## Supported EU Regions
+## Supported EU regions
#### [Java](#tab/eu-java) Statsbeat supports EU Data Boundary for Application Insights resources in the following regions:
-| Geo Name | Region Name |
+| Geo name | Region name |
||| | Europe | North Europe | | Europe | West Europe |
Statsbeat supports EU Data Boundary for Application Insights resources in the fo
Statsbeat supports EU Data Boundary for Application Insights resources in the following regions:
-| Geo Name | Region Name |
+| Geo name | Region name |
||| | Europe | North Europe | | Europe | West Europe |
Statsbeat supports EU Data Boundary for Application Insights resources in the fo
Statsbeat supports EU Data Boundary for Application Insights resources in the following regions:
-| Geo Name | Region Name |
+| Geo name | Region name |
||| | Europe | North Europe | | Europe | West Europe |
Statsbeat supports EU Data Boundary for Application Insights resources in the fo
| United Kingdom | United Kingdom South | | United Kingdom | United Kingdom West | - ### Essential Statsbeat #### Network Statsbeat
-|Metric Name|Unit|Supported dimensions|
+|Metric name|Unit|Supported dimensions|
|--|--|--| |Request Success Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`| |Requests Failure Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Status Code`| |Request Duration|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`|
-|Retry Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, , `Status Code`|
+|Retry Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Status Code`|
|Throttle Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Status Code`| |Exception Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Exception Type`| [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]+ #### Attach Statsbeat
-|Metric Name|Unit|Supported dimensions|
+|Metric name|Unit|Supported dimensions|
|--|--|--| |Attach|Count| `Resource Provider`, `Resource Provider Identifier`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`| #### Feature Statsbeat
-|Metric Name|Unit|Supported dimensions|
+|Metric name|Unit|Supported dimensions|
|--|--|--| |Feature|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Feature`, `Type`, `Operating System`, `Language`, `Version`|
-### Non-essential Statsbeat
+### Nonessential Statsbeat
-Track the Disk I/O failure when using disk persistence for reliable telemetry.
+Track the Disk I/O failure when you use disk persistence for reliable telemetry.
-|Metric Name|Unit|Supported dimensions|
+|Metric name|Unit|Supported dimensions|
|--|--|--| |Read Failure Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`| |Write Failure Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`|
Track the Disk I/O failure when using disk persistence for reliable telemetry.
#### [Java](#tab/java)
-To disable non-essential Statsbeat, add the below configuration to your config file.
+To disable nonessential Statsbeat, add the following configuration to your config file:
```json {
To disable non-essential Statsbeat, add the below configuration to your config f
} ```
-You can also disable this feature by setting the environment variable `APPLICATIONINSIGHTS_STATSBEAT_DISABLED` to true (which will then take precedence over disabled specified in the json configuration).
+You can also disable this feature by setting the environment variable `APPLICATIONINSIGHTS_STATSBEAT_DISABLED` to `true`. This setting then takes precedence over `disabled`, which is specified in the JSON configuration.
#### [Node](#tab/node)
azure-monitor Usage Heart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-heart.md
These dimensions are measured independently, but they interact with each other a
*Use the [Click Analytics Auto collection plugin](javascript-feature-extensions.md) via npm to emit these attributes. >[!TIP]
-> To understand how to effectively use the Click Analytics plugin, please refer to [this section](javascript-feature-extensions.md#how-to-effectively-use-the-plugin).
+> To understand how to effectively use the Click Analytics plugin, please refer to [this section](javascript-feature-extensions.md#use-the-plug-in).
### Open the workbook The workbook can be found in the gallery under 'public templates'. The workbook will be shown in the section titled **"Product Analytics using the Click Analytics Plugin"** as shown in the following image:
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
Azure Monitor collects these types of data:
|Data Type|Description and subtypes| ||--|
-|Application|Data about the performance and functionality of your application code on any platform.|
+|Application|Application performance, health, and activity data.|
|Infrastructure|**Container** - Data about containers, such as [Azure Kubernetes Service](../aks/intro-kubernetes.md), [Prometheus](./essentials/prometheus-metrics-overview.md), and the applications running inside containers.<br><br>**Operating system** - Data about the guest operating system on which your application is running.|
-|Azure Platform|**Azure resource** - Data about the operation of an Azure resource from inside the resource, including changes. Resource Logs are one example. <br><br>**Azure subscription** - The operation and management of an Azure subscription, and data about the health and operation of Azure itself. The activity log is one example.<br><br>**Azure tenant** - Data about the operation of tenant-level Azure services, such as Azure Active Directory.<br>| Data sent into the Azure Monitor data platform using the Azure Monitor REST API. |
-|Custom Sources| Data which gets into the system using Azure Monitor REST API.
+|Azure Platform <br><br> Data sent into the Azure Monitor data platform using the Azure Monitor REST API. |**Azure resource** - Data about the operation of an Azure resource from inside the resource, including changes. Resource Logs are one example. <br><br>**Azure subscription** - The operation and management of an Azure subscription, and data about the health and operation of Azure itself. The activity log is one example.<br><br>**Azure tenant** - Data about the operation of tenant-level Azure services, such as Azure Active Directory.<br> |
+|Custom Sources| Data which gets into the system using Azure Monitor REST API. |
For detailed information about each of the data sources, see [data sources](./data-sources.md).
Click on the picture to see a larger version of the data platform diagram in con
||| |[Azure Monitor Metrics](essentials/data-platform-metrics.md)|Metrics are numerical values that describe an aspect of a system at a particular point in time. [Azure Monitor Metrics](./essentials/data-platform-metrics.md) is a time-series database, optimized for analyzing time-stamped data. Azure Monitor collects metrics at regular intervals. Metrics are identified with a timestamp, a name, a value, and one or more defining labels. They can be aggregated using algorithms, compared to other metrics, and analyzed for trends over time. It supports native Azure Monitor metrics and [Prometheus metrics](essentials/prometheus-metrics-overview.md).| |[Azure Monitor Logs](logs/data-platform-logs.md)|Logs are recorded system events. Logs can contain different types of data, be structured or free-form text, and they contain a timestamp. Azure Monitor stores structured and unstructured log data of all types in [Azure Monitor Logs](./logs/data-platform-logs.md). You can route data to [Log Analytics workspaces](./logs/log-analytics-overview.md) for querying and analysis.|
-|Traces|[Distributed traces](app/distributed-tracing.md) identify the series of related events that follow a user request through a distributed system. A trace measures the operation and performance of your application across the entire set of components in your system. Traces can be used to determine the behavior of application code and the performance of different transactions. Azure Monitor gets distributed trace data from the Application Insights SDK. The trace data is stored in a separate workspace in Azure Monitor Logs.|
+|Traces|[Distributed tracing](app/distributed-tracing.md) allows you to see the path of a request as it travels through different services and components. Azure Monitor gets distributed trace data from [instrumented applications](app/app-insights-overview.md#how-do-i-instrument-an-application). The trace data is stored in a separate workspace in Azure Monitor Logs.|
|Changes|Changes are a series of events in your application and resources. They're tracked and stored when you use the [Change Analysis](./change/change-analysis.md) service, which uses [Azure Resource Graph](../governance/resource-graph/overview.md) as its store. Change Analysis helps you understand which changes, such as deploying updated code, may have caused issues in your systems.|
+Distributed tracing is a technique for following a request as it travels through the services and components of a distributed system. It helps you identify performance bottlenecks and troubleshoot issues across those components.
For less expensive, long-term archival of monitoring data for auditing or compliance purposes, you can export to [Azure Storage](/azure/storage/).
Click on the picture to see a larger version of the data collection diagram in c
|Collection method|Description | |||
-|[Application SDK](app/app-insights-overview.md)| You can add the Application Insights SDK to your application code to receive, store, and explore your monitoring data. The SDK preprocesses telemetry and metrics before sending the data to Azure where it's ingested and processed further before being stored in Azure Monitor Logs.|
-|[Agents](agents/agents-overview.md)|Agents can collect monitoring data from applications, the guest operating system of Azure, and hybrid virtual machines.|
+|[Application instrumentation](app/app-insights-overview.md)| Application Insights is enabled through either [Auto-Instrumentation (agent)](app/codeless-overview.md#what-is-auto-instrumentation-for-azure-monitor-application-insights) or by adding the Application Insights SDK to your application code. For more information, reference [How do I instrument an application?](app/app-insights-overview.md#how-do-i-instrument-an-application).|
+|[Agents](agents/agents-overview.md)|Agents can collect monitoring data from the guest operating system of Azure and hybrid virtual machines.|
|[Data collection rules](essentials/data-collection-rule-overview.md)|Use data collection rules to specify what data should be collected, how to transform it, and where to send it.| |Internal| Data is automatically sent to a destination without user configuration. | |[Diagnostic settings](essentials/diagnostic-settings.md)|Use diagnostic settings to determine where to send resource log and activity log data on the data platform.|
The following table describes some of the larger insights:
|Insight |Description | |||
-|[Application Insights](app/app-insights-overview.md)|Application Insights takes advantage of the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. Application Insights monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. You can use it to diagnose errors without waiting for a user to report them. Application Insights includes connection points to various development tools and integrates with Visual Studio to support your DevOps processes.|
+|[Application Insights](app/app-insights-overview.md)|Application Insights monitors the availability, performance, and usage of your web applications.|
|[Container Insights](containers/container-insights-overview.md)|Container Insights gives you performance visibility into container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service. Container Insights collects container logs and metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux.| |[VM Insights](vm/vminsights-overview.md)|VM Insights monitors your Azure VMs. It analyzes the performance and health of your Windows and Linux VMs and identifies their different processes and interconnected dependencies on external processes. The solution includes support for monitoring performance and application dependencies for VMs hosted on-premises or another cloud provider.| |[Network Insights](../network-watcher/network-insights-overview.md)|Network Insights provides a comprehensive and visual representation through topologies, of health and metrics for all deployed network resources, without requiring any configuration. It also provides access to network monitoring capabilities like Connection Monitor, flow logging for network security groups (NSGs), and Traffic Analytics as well as other diagnostic features. |
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The following table applies to v1, v2, Standard, and WAF SKUs unless otherwise s
[!INCLUDE [azure-front-door-service-limits](../../../includes/front-door-limits.md)]
+### Azure Network Watcher limits
++ ### Azure Route Server limits [!INCLUDE [Azure Route Server Limits](../../../includes/route-server-limits.md)]
The following table applies to v1, v2, Standard, and WAF SKUs unless otherwise s
[!INCLUDE [nat-gateway-limits](../../../includes/azure-nat-gateway-limits.md)]
-### Network Watcher limits
-- ### Private Link limits [!INCLUDE [private-link-limits](../../../includes/private-link-limits.md)]
azure-resource-manager Common Deployment Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/common-deployment-errors.md
Title: Troubleshoot common Azure deployment errors
description: Troubleshoot common Azure deployment errors for resources that are deployed with Bicep files or Azure Resource Manager templates (ARM templates). tags: top-support-issue Previously updated : 02/21/2023 Last updated : 03/30/2023 # Troubleshoot common Azure deployment errors
If your error code isn't listed, submit a GitHub issue. On the right side of the
| AllocationFailed | The cluster or region doesn't have resources available or can't support the requested VM size. Retry the request at a later time, or request a different VM size. | [Provisioning and allocation issues for Linux](/troubleshoot/azure/virtual-machines/troubleshoot-deployment-new-vm-linux) <br><br> [Provisioning and allocation issues for Windows](/troubleshoot/azure/virtual-machines/troubleshoot-deployment-new-vm-windows) <br><br> [Troubleshoot allocation failures](/troubleshoot/azure/virtual-machines/allocation-failure)| | AnotherOperationInProgress | Wait for concurrent operation to complete. | | | AuthorizationFailed | Your account or service principal doesn't have sufficient access to complete the deployment. Check the role your account belongs to, and its access for the deployment scope.<br><br>You might receive this error when a required resource provider isn't registered. | [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md)<br><br>[Resolve registration](error-register-resource-provider.md) |
-| BadRequest | You sent deployment values that don't match what is expected by Resource Manager. Check the inner status message for help with troubleshooting. | [Template reference](/azure/templates/) <br><br> [Resource location in ARM template](../templates/resource-location.md) <br><br> [Resource location in Bicep file](../bicep/resource-declaration.md#location) |
+| BadRequest | You sent deployment values that don't match what is expected by Resource Manager. Check the inner status message for help with troubleshooting. <br><br> Validate the template's syntax to resolve deployment errors when using a template that was exported from an existing Azure resource. | [Template reference](/azure/templates/) <br><br> [Resource location in ARM template](../templates/resource-location.md) <br><br> [Resource location in Bicep file](../bicep/resource-declaration.md#location) <br><br> [Resolve invalid template](error-invalid-template.md)|
| Conflict | You're requesting an operation that isn't allowed in the resource's current state. For example, disk resizing is allowed only when creating a VM or when the VM is deallocated. | | | DeploymentActiveAndUneditable | Wait for concurrent deployment to this resource group to complete. | | | DeploymentFailedCleanUp | When you deploy in complete mode, any resources that aren't in the template are deleted. You get this error when you don't have adequate permissions to delete all of the resources not in the template. To avoid the error, change the deployment mode to incremental. | [Azure Resource Manager deployment modes](../templates/deployment-modes.md) |
azure-resource-manager Error Invalid Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-invalid-template.md
Title: Invalid template errors description: Describes how to resolve invalid template errors when deploying Bicep files or Azure Resource Manager templates (ARM templates). Previously updated : 01/03/2023 Last updated : 03/30/2023 # Resolve errors for invalid template
The same approach works for App Service apps. Consider moving configuration valu
1. webapp2 1. Configuration for webapp1 depends on webapp1 and webapp2. It contains app settings with values from webapp2. 1. Configuration for webapp2 depends on webapp1 and webapp2. It contains app settings with values from webapp1.+
+## Solution 6: Validate syntax for exported templates
+
+After you deploy resources in Azure, you can export the ARM template JSON and modify it for other deployments. You should validate the exported template for correct syntax _before_ you use it to deploy resources.
+
+You can export a template from the [portal](../templates/export-template-portal.md), [Azure CLI](../templates/export-template-cli.md), or [Azure PowerShell](../templates/export-template-powershell.md). The following recommendations apply whether you exported the template from a resource or resource group, or from the deployment history.
+
+# [Bicep](#tab/bicep)
+
+After you export an ARM template, you can decompile the JSON template to Bicep. Then use best practices and the linter to validate your code.
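A rough sketch of that flow from the command line, assuming the Azure CLI with the Bicep tooling is installed; the resource group name and file names are placeholders:

```bash
# Export the ARM template for an existing resource group (myResourceGroup is a placeholder)
az group export --name myResourceGroup > main.json

# Decompile the exported JSON into main.bicep
az bicep decompile --file main.json

# Recompiling the Bicep file runs the linter and reports syntax and best-practice warnings
az bicep build --file main.bicep
```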
+
+For more information, go to the following articles:
+
+- [Decompiling ARM template JSON to Bicep](../bicep/decompile.md)
+- [Best practices for Bicep](../bicep/best-practices.md)
+- [Add linter settings in the Bicep config file](../bicep/bicep-config-linter.md)
++
+# [JSON](#tab/json)
+
+After you export an ARM template, you can learn more about best practices and the toolkit for template validation:
+
+- [ARM template best practices](../templates/best-practices.md)
+- [ARM template test toolkit](../templates/test-toolkit.md)
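As an additional check, you can ask Resource Manager to validate the exported template against a resource group before you deploy it. A minimal sketch, assuming the Azure CLI is installed; the resource group and file names are placeholders, and you can add `--parameters` if your template requires them:

```bash
# Validate the exported template without deploying any resources
az deployment group validate \
  --resource-group myResourceGroup \
  --template-file exported-template.json
```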
++
azure-signalr Signalr Quickstart Azure Functions Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-javascript.md
Title: Use JavaScript to create a chat room with Azure Functions and SignalR Service
+ Title: Azure SignalR Service serverless quickstart - JavaScript
description: A quickstart for using Azure SignalR Service and Azure Functions to create an App showing GitHub star count using JavaScript. Previously updated : 04/04/2022 Last updated : 12/15/2022 ms.devlang: javascript
-# Quickstart: Use JavaScript to create an App showing GitHub star count with Azure Functions and SignalR Service
+# Quickstart: Create a serverless app with Azure Functions and SignalR Service using JavaScript
In this article, you'll use Azure SignalR Service, Azure Functions, and JavaScript to build a serverless application to broadcast messages to clients. > [!NOTE]
-> You can get all code mentioned in the article from [GitHub](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/QuickStartServerless/javascript).
+> You can get all code used in the article from [GitHub](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/QuickStartServerless/javascript).
## Prerequisites -- A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).-- An Azure account with an active subscription. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).-- [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing), version 2 or above. Used to run Azure Function apps locally.-- [Node.js](https://nodejs.org/en/download/), version 10.x-
-The examples should work with other versions of Node.js, for more information, see [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages).
- This quickstart can be run on macOS, Windows, or Linux.
+| Prerequisite | Description |
+| | |
+| An Azure subscription |If you don't have a subscription, create an [Azure free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)|
+| A code editor | You'll need a code editor such as [Visual Studio Code](https://code.visualstudio.com/). |
+| [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing)| Requires version 2.7.1505 or higher to run Azure Function apps locally.|
+|[Node.js](https://nodejs.org/en/download/)| See supported Node.js versions in the [Azure Functions JavaScript developer guide](../azure-functions/functions-reference-node.md#node-version).|
+| [Azurite](../storage/common/storage-use-azurite.md)| SignalR binding needs Azure Storage. You can use a local storage emulator when a function is running locally. |
+| [Azure CLI](/cli/azure/install-azure-cli)| Optionally, you can use the Azure CLI to create an Azure SignalR Service instance. |
+ ## Create an Azure SignalR Service instance [!INCLUDE [Create instance](includes/signalr-quickstart-create-instance.md)]
-## Setup and run the Azure Function locally
+## Set up the function project
Make sure you have Azure Functions Core Tools installed.
-1. Using the command line, create an empty directory and then change to it. Initialize a new project:
+1. Open a command line.
+1. Create a project directory and then change to it.
+1. Run the Azure Functions `func init` command to initialize a new project.
+
+ ```bash
+ # Initialize a function project
+ func init --worker-runtime javascript
+ ```
+
+## Create the project functions
+
+After you initialize a project, you need to create functions. This project requires three functions:
+
+- `index`: Hosts a web page for a client.
+- `negotiate`: Allows a client to get an access token.
+- `broadcast`: Uses a time trigger to periodically broadcast messages to all clients.
+
+When you run the `func new` command from the root directory of the project, the Azure Functions Core Tools creates the function source files and stores them in a folder named after the function. You'll edit the files as necessary, replacing the default code with the app code.
+
+### Create the index function
+
+1. Run the following command to create the `index` function.
+
+ ```bash
+ func new -n index -t HttpTrigger
+ ```
+
+1. Edit *index/function.json* and replace the contents with the following json code:
+
+ ```json
+ {
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ }
+ ]
+ }
+ ```
+
+1. Edit *index/index.js* and replace the contents with the following code:
+
+ ```javascript
+ var fs = require('fs').promises
+
+ module.exports = async function (context, req) {
+ const path = context.executionContext.functionDirectory + '/../content/index.html'
+ try {
+ var data = await fs.readFile(path);
+ context.res = {
+ headers: {
+ 'Content-Type': 'text/html'
+ },
+ body: data
+ }
+ context.done()
+ } catch (err) {
+ context.log.error(err);
+ context.done(err);
+ }
+ }
+ ```
+
+### Create the negotiate function
+
+1. Run the following command to create the `negotiate` function.
+
+ ```bash
+ func new -n negotiate -t HttpTrigger
+ ```
+
+1. Edit *negotiate/function.json* and replace the contents with the following json code:
+
+ ```json
+ {
+ "disabled": false,
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "post"
+ ],
+ "name": "req",
+ "route": "negotiate"
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ },
+ {
+ "type": "signalRConnectionInfo",
+ "name": "connectionInfo",
+ "hubName": "serverless",
+ "connectionStringSetting": "AzureSignalRConnectionString",
+ "direction": "in"
+ }
+ ]
+ }
+ ```
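The `HttpTrigger` template also generates a *negotiate/index.js* file. A minimal sketch of the handler, which only needs to return the connection information supplied by the `signalRConnectionInfo` input binding named `connectionInfo` in the *function.json* above:

```javascript
// negotiate/index.js - return the SignalR connection info from the input binding
module.exports = async function (context, req, connectionInfo) {
    context.res.body = connectionInfo;
};
```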
+
+### Create the broadcast function
+
+1. Run the following command to create the `broadcast` function.
+
+ ```bash
+ func new -n broadcast -t TimerTrigger
+ ```
+
+1. Edit *broadcast/function.json* and replace the contents with the following code:
++
+ ```json
+ {
+ "bindings": [
+ {
+ "name": "myTimer",
+ "type": "timerTrigger",
+ "direction": "in",
+ "schedule": "*/5 * * * * *"
+ },
+ {
+ "type": "signalR",
+ "name": "signalRMessages",
+ "hubName": "serverless",
+ "connectionStringSetting": "AzureSignalRConnectionString",
+ "direction": "out"
+ }
+ ]
+ }
+ ```
+
+1. Edit *broadcast/index.js* and replace the contents with the following code:
+
+ ```javascript
+ var https = require('https');
+
+ var etag = '';
+ var star = 0;
+
+ module.exports = function (context) {
+ var req = https.request("https://api.github.com/repos/azure/azure-signalr", {
+ method: 'GET',
+ headers: {'User-Agent': 'serverless', 'If-None-Match': etag}
+ }, res => {
+ if (res.headers['etag']) {
+ etag = res.headers['etag']
+ }
+
+ var body = "";
+
+ res.on('data', data => {
+ body += data;
+ });
+ res.on("end", () => {
+ if (res.statusCode === 200) {
+ var jbody = JSON.parse(body);
+ star = jbody['stargazers_count'];
+ }
+
+ context.bindings.signalRMessages = [{
+ "target": "newMessage",
+ "arguments": [ `Current star count of https://github.com/Azure/azure-signalr is: ${star}` ]
+ }]
+ context.done();
+ });
+ }).on("error", (error) => {
+ context.log(error);
+ context.res = {
+ status: 500,
+ body: error
+ };
+ context.done();
+ });
+ req.end();
+ }
+ ```
+
+### Create the index.html file
- ```bash
- # Initialize a function project
- func init --worker-runtime javascript
- ```
+The client interface for this app is a web page. The `index` function reads HTML content from the *content/index.html* file.
+
+1. Create a folder called `content` in your project root folder.
+1. Create the file *content/index.html*.
+1. Copy the following content to the *content/index.html* file and save it:
-2. After you initialize a project, you need to create functions. In this sample, we'll create three functions:
-
- 1. Run the following command to create a `index` function, which will host a web page for clients.
-
- ```bash
- func new -n index -t HttpTrigger
- ```
-
- Open *index/function.json* and copy the following json code:
-
- ```json
- {
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "type": "http",
- "direction": "out",
- "name": "res"
- }
- ]
- }
- ```
-
- Open *index/index.js* and copy the following code:
-
- ```javascript
- var fs = require('fs').promises
-
- module.exports = async function (context, req) {
- const path = context.executionContext.functionDirectory + '/../content/https://docsupdatetracker.net/index.html'
- try {
- var data = await fs.readFile(path);
- context.res = {
- headers: {
- 'Content-Type': 'text/html'
- },
- body: data
- }
- context.done()
- } catch (err) {
- context.log.error(err);
- context.done(err);
- }
- }
- ```
-
- 2. Create a `negotiate` function for clients to get an access token.
-
- ```bash
- func new -n negotiate -t SignalRNegotiateHTTPTrigger
- ```
-
- Open *negotiate/function.json* and copy the following json code:
-
- ```json
- {
- "disabled": false,
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "methods": [
- "post"
- ],
- "name": "req",
- "route": "negotiate"
- },
- {
- "type": "http",
- "direction": "out",
- "name": "res"
- },
- {
- "type": "signalRConnectionInfo",
- "name": "connectionInfo",
- "hubName": "serverless",
- "connectionStringSetting": "AzureSignalRConnectionString",
- "direction": "in"
- }
- ]
- }
- ```
-
- 3. Create a `broadcast` function to broadcast messages to all clients. In the sample, we use a time trigger to broadcast messages periodically.
-
- ```bash
- func new -n broadcast -t TimerTrigger
- ```
-
- Open *broadcast/function.json* and copy the following code:
-
- ```json
- {
- "bindings": [
- {
- "name": "myTimer",
- "type": "timerTrigger",
- "direction": "in",
- "schedule": "*/5 * * * * *"
- },
- {
- "type": "signalR",
- "name": "signalRMessages",
- "hubName": "serverless",
- "connectionStringSetting": "AzureSignalRConnectionString",
- "direction": "out"
- }
- ]
- }
- ```
-
- Open *broadcast/index.js* and copy the following code:
-
- ```javascript
- var https = require('https');
-
- var etag = '';
- var star = 0;
-
- module.exports = function (context) {
- var req = https.request("https://api.github.com/repos/azure/azure-signalr", {
- method: 'GET',
- headers: {'User-Agent': 'serverless', 'If-None-Match': etag}
- }, res => {
- if (res.headers['etag']) {
- etag = res.headers['etag']
- }
-
- var body = "";
-
- res.on('data', data => {
- body += data;
- });
- res.on("end", () => {
- if (res.statusCode === 200) {
- var jbody = JSON.parse(body);
- star = jbody['stargazers_count'];
- }
-
- context.bindings.signalRMessages = [{
- "target": "newMessage",
- "arguments": [ `Current star count of https://github.com/Azure/azure-signalr is: ${star}` ]
- }]
- context.done();
- });
- }).on("error", (error) => {
- context.log(error);
- context.res = {
- status: 500,
- body: error
- };
- context.done();
- });
- req.end();
- }
- ```
-
-3. The client interface of this sample is a web page. We read HTML content from *content/https://docsupdatetracker.net/index.html* in the `index` function, create a new file named *https://docsupdatetracker.net/index.html* in the `content` directory under your project root folder. Copy the following code:
```html <html>
Make sure you have Azure Functions Core Tools installed.
</html> ``` +
+### Add the SignalR Service connection string to the function app settings
1. Azure Functions requires a storage account to work. You can install and run the [Azure Storage Emulator](../storage/common/storage-use-azurite.md). **Or** you can update the setting to use your real storage account with the following command: ```bash func settings add AzureWebJobsStorage "<storage-connection-string>"
Make sure you have Azure Functions Core Tools installed.
4. You're almost done now. The last step is to set a connection string of the SignalR Service to Azure Function settings.
- 1. In the Azure portal, find the SignalR instance you deployed earlier by typing its name in the **Search** box. Select the instance to open it.
- ![Search for the SignalR Service instance](media/signalr-quickstart-azure-functions-csharp/signalr-quickstart-search-instance.png)
+The last step is to set the SignalR Service connection string in Azure Function app settings.
- 1. Select **Keys** to view the connection strings for the SignalR Service instance.
+1. In the Azure portal, go to the SignalR instance you deployed earlier.
+1. Select **Keys** to view the connection strings for the SignalR Service instance.
- ![Screenshot that highlights the primary connection string.](media/signalr-quickstart-azure-functions-javascript/signalr-quickstart-keys.png)
+ :::image type="content" source="media/signalr-quickstart-azure-functions-javascript/signalr-quickstart-keys.png" alt-text="Screenshot of Azure SignalR service Keys page.":::
- 1. Copy the primary connection string. And execute the command below.
+1. Copy the primary connection string, and execute the command:
- ```bash
- func settings add AzureSignalRConnectionString "<signalr-connection-string>"
- ```
+ ```bash
+ func settings add AzureSignalRConnectionString "<signalr-connection-string>"
+ ```
+
+### Run the Azure Function app locally
-5. Run the Azure function in local host:
- ```bash
- func start
- ```
+Start the Azurite storage emulator:
- After Azure Function running locally. Use your browser to visit `http://localhost:7071/api/index` and you can see the current star count. And if you star or "unstar" in GitHub, you'll see the star count refreshing every few seconds.
+ ```bash
+ azurite
+ ```
+
+Run the Azure Function app in the local environment:
+
+ ```bash
+ func start
+ ```
+
+> [!NOTE]
+> If you see errors about reading from blob storage, make sure the `AzureWebJobsStorage` setting in the *local.settings.json* file is set to `UseDevelopmentStorage=true`.
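If the setting is missing, you can add it with the Core Tools from the project directory; this is a minimal sketch that only updates *local.settings.json* for local runs:

```bash
# Point the local function host at the Azurite storage emulator
func settings add AzureWebJobsStorage "UseDevelopmentStorage=true"
```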
+After the Azure Function is running locally, go to `http://localhost:7071/api/index`. The page displays the current star count for the GitHub Azure/azure-signalr repository. When you star or unstar the repository in GitHub, you'll see the refreshed count every few seconds.
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qscsharp)
azure-signalr Signalr Quickstart Azure Functions Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-python.md
Title: Azure SignalR Service serverless quickstart - Python
description: A quickstart for using Azure SignalR Service and Azure Functions to create an App showing GitHub star count using Python. Previously updated : 04/19/2022 Last updated : 12/15/2022 ms.devlang: python
-# Quickstart: Create a serverless app with Azure Functions, SignalR Service, and Python
+# Quickstart: Create a serverless app with Azure Functions and Azure SignalR Service in Python
Get started with Azure SignalR Service by using Azure Functions and Python to build a serverless application that broadcasts messages to clients. You'll run the function in the local environment, connecting to an Azure SignalR Service instance in the cloud. Completing this quickstart incurs a small cost of a few USD cents or less in your Azure Account.
Get started with Azure SignalR Service by using Azure Functions and Python to bu
## Prerequisites
-This quickstart can be run on macOS, Windows, or Linux.
+This quickstart can be run on macOS, Windows, or Linux. You will need the following:
-- You'll need a code editor such as [Visual Studio Code](https://code.visualstudio.com/).--- Install the [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (version 2.7.1505 or higher) to run Python Azure Function apps locally.--- Azure Functions requires [Python 3.6+](https://www.python.org/downloads/). (See [Supported Python versions](../azure-functions/functions-reference-python.md#python-version).)--- SignalR binding needs Azure Storage, but you can use a local storage emulator when a function is running locally. You'll need to download and enable [Storage Emulator](../storage/common/storage-use-emulator.md).-
+| Prerequisite | Description |
+| | |
+| An Azure subscription |If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)|
+| A code editor | You'll need a code editor such as [Visual Studio Code](https://code.visualstudio.com/). |
+| [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing)| Requires version 2.7.1505 or higher to run Python Azure Function apps locally.|
+| [Python 3.6+](https://www.python.org/downloads/)| Azure Functions requires Python 3.6+. See [Supported Python versions](../azure-functions/functions-reference-python.md#python-version). |
+| [Azurite](../storage/common/storage-use-azurite.md)| SignalR binding needs Azure Storage. You can use a local storage emulator when a function is running locally. |
+| [Azure CLI](/cli/azure/install-azure-cli)| Optionally, you can use the Azure CLI to create an Azure SignalR Service instance. |
## Create an Azure SignalR Service instance [!INCLUDE [Create instance](includes/signalr-quickstart-create-instance.md)]
-## Setup and run the Azure Function locally
-
-1. Create an empty directory and go to the directory with command line.
-
- ```bash
- # Initialize a function project
- func init --worker-runtime python
- ```
-
-2. After you initialize a project, you need to create functions. In this sample, we need to create three functions: `index`, `negotiate`, and `broadcast`.
-
- 1. Run the following command to create an `index` function, which will host a web page for a client.
-
- ```bash
- func new -n index -t HttpTrigger
- ```
-
- Open *index/function.json* and copy the following json code:
-
- ```json
- {
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "type": "http",
- "direction": "out",
- "name": "$return"
- }
- ]
- }
- ```
-
- Open *index/\__init\__.py* and copy the following code:
-
- ```javascript
- import os
-
- import azure.functions as func
+## Create the Azure Function project
- def main(req: func.HttpRequest) -> func.HttpResponse:
- f = open(os.path.dirname(os.path.realpath(__file__)) + '/../content/https://docsupdatetracker.net/index.html')
- return func.HttpResponse(f.read(), mimetype='text/html')
- ```
+Create a local Azure Function project.
- 2. Create a `negotiate` function for clients to get access token.
+1. From a command line, create a directory for your project.
+1. Change to the project directory.
+1. Use the Azure Functions `func init` command to initialize your function project.
- ```bash
- func new -n negotiate -t HttpTrigger
- ```
+ ```bash
+ # Initialize a function project
+ func init --worker-runtime python
+ ```
- Open *negotiate/function.json* and copy the following json code:
-
- ```json
- {
- "scriptFile": "__init__.py",
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req",
- "methods": [
- "post"
- ]
- },
- {
- "type": "http",
- "direction": "out",
- "name": "$return"
- },
- {
- "type": "signalRConnectionInfo",
- "name": "connectionInfo",
- "hubName": "serverless",
- "connectionStringSetting": "AzureSignalRConnectionString",
- "direction": "in"
- }
- ]
- }
- ```
+## Create the functions
- Open *negotiate/\__init\__.py* and copy the following code:
+After you initialize a project, you need to create functions. This project requires three functions:
- ```python
- import azure.functions as func
+- `index`: Hosts a web page for a client.
+- `negotiate`: Allows a client to get an access token.
+- `broadcast`: Uses a time trigger to periodically broadcast messages to all clients.
+When you run the `func new` command from the root directory of the project, the Azure Functions Core Tools creates default function source files and stores them in a folder named after the function. You'll edit the files as necessary, replacing the default code with the app code.
- def main(req: func.HttpRequest, connectionInfo) -> func.HttpResponse:
- return func.HttpResponse(connectionInfo)
- ```
+### Create the index function
- 3. Create a `broadcast` function to broadcast messages to all clients. In the sample, we use time trigger to broadcast messages periodically.
+You can use this sample function as a template for your own functions.
- ```bash
- func new -n broadcast -t TimerTrigger
- # install requests
- pip install requests
- ```
+1. Run the following command to create the `index` function.
- Open *broadcast/function.json* and copy the following code:
-
- ```json
- {
- "scriptFile": "__init__.py",
- "bindings": [
- {
- "name": "myTimer",
- "type": "timerTrigger",
- "direction": "in",
- "schedule": "*/5 * * * * *"
- },
- {
- "type": "signalR",
- "name": "signalRMessages",
- "hubName": "serverless",
- "connectionStringSetting": "AzureSignalRConnectionString",
- "direction": "out"
- }
- ]
- }
- ```
-
- Open *broadcast/\__init\__.py* and copy the following code:
+ ```bash
+ func new -n index -t HttpTrigger
+ ```
- ```python
- import requests
- import json
+1. Edit *index/function.json* and replace the contents with the following json code:
- import azure.functions as func
+ ```json
+ {
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "$result"
+ }
+ ]
+ }
+ ```
- etag = ''
- start_count = 0
+1. Edit *index/\__init\__.py* and replace the contents with the following code:
- def main(myTimer: func.TimerRequest, signalRMessages: func.Out[str]) -> None:
- global etag
- global start_count
- headers = {'User-Agent': 'serverless', 'If-None-Match': etag}
- res = requests.get('https://api.github.com/repos/azure/azure-signalr', headers=headers)
- if res.headers.get('ETag'):
- etag = res.headers.get('ETag')
+ ```python
+ import os
- if res.status_code == 200:
- jres = res.json()
- start_count = jres['stargazers_count']
-
- signalRMessages.set(json.dumps({
- 'target': 'newMessage',
- 'arguments': [ 'Current star count of https://github.com/Azure/azure-signalr is: ' + str(start_count) ]
- }))
+ import azure.functions as func
+
+ def main(req: func.HttpRequest) -> func.HttpResponse:
+ f = open(os.path.dirname(os.path.realpath(__file__)) + '/../content/index.html')
+ return func.HttpResponse(f.read(), mimetype='text/html')
```
-3. The client interface of this sample is a web page. We read HTML content from *content/https://docsupdatetracker.net/index.html* in the `index` function, and then create a new file *https://docsupdatetracker.net/index.html* in the `content` directory under your project root folder. Copy the following content:
-
- ```html
- <html>
-
- <body>
- <h1>Azure SignalR Serverless Sample</h1>
- <div id="messages"></div>
- <script src="https://cdnjs.cloudflare.com/ajax/libs/microsoft-signalr/3.1.7/signalr.min.js"></script>
- <script>
- let messages = document.querySelector('#messages');
- const apiBaseUrl = window.location.origin;
- const connection = new signalR.HubConnectionBuilder()
- .withUrl(apiBaseUrl + '/api')
- .configureLogging(signalR.LogLevel.Information)
- .build();
- connection.on('newMessage', (message) => {
- document.getElementById("messages").innerHTML = message;
- });
-
- connection.start()
- .catch(console.error);
- </script>
- </body>
-
- </html>
- ```
+### Create the negotiate function
+
+1. Run the following command to create the `negotiate` function.
+
+ ```bash
+ func new -n negotiate -t HttpTrigger
+ ```
+
+1. Edit *negotiate/function.json* and replace the contents with the following json code:
+
+ ```json
+ {
+ "scriptFile": "__init__.py",
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "post"
+ ]
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "$return"
+ },
+ {
+ "type": "signalRConnectionInfo",
+ "name": "connectionInfo",
+ "hubName": "serverless",
+ "connectionStringSetting": "AzureSignalRConnectionString",
+ "direction": "in"
+ }
+ ]
+ }
+ ```
+
+1. Edit *negotiate/\__init\__.py* and replace the contents with the following code:
+
+ ```python
+ import azure.functions as func
+
+
+ def main(req: func.HttpRequest, connectionInfo) -> func.HttpResponse:
+ return func.HttpResponse(connectionInfo)
+ ```
+
+### Create the broadcast function
+
+1. Run the following command to create the `broadcast` function.
+
+ ```bash
+ func new -n broadcast -t TimerTrigger
+ # install requests
+ pip install requests
+ ```
+
+1. Edit *broadcast/function.json* and replace the contents with the following code:
+
+ ```json
+ {
+ "scriptFile": "__init__.py",
+ "bindings": [
+ {
+ "name": "myTimer",
+ "type": "timerTrigger",
+ "direction": "in",
+ "schedule": "*/5 * * * * *"
+ },
+ {
+ "type": "signalR",
+ "name": "signalRMessages",
+ "hubName": "serverless",
+ "connectionStringSetting": "AzureSignalRConnectionString",
+ "direction": "out"
+ }
+ ]
+ }
+ ```
+
+1. Edit *broadcast/\__init\__.py* and replace the contents with the following code:
+
+ ```python
+ import requests
+ import json
+
+ import azure.functions as func
+
+ etag = ''
+ start_count = 0
+
+ def main(myTimer: func.TimerRequest, signalRMessages: func.Out[str]) -> None:
+ global etag
+ global start_count
+ headers = {'User-Agent': 'serverless', 'If-None-Match': etag}
+ res = requests.get('https://api.github.com/repos/azure/azure-signalr', headers=headers)
+ if res.headers.get('ETag'):
+ etag = res.headers.get('ETag')
+
+ if res.status_code == 200:
+ jres = res.json()
+ start_count = jres['stargazers_count']
+
+ signalRMessages.set(json.dumps({
+ 'target': 'newMessage',
+ 'arguments': [ 'Current star count of https://github.com/Azure/azure-signalr is: ' + str(start_count) ]
+ }))
+ ```
+
+### Create the index.html file
+
+The client interface for this app is a web page. The `index` function reads HTML content from the *content/index.html* file.
+
+1. Create a folder called `content` in your project root folder.
+1. Create the file *content/index.html*.
+1. Copy the following content to the *content/index.html* file and save it:
+
+ ```html
+ <html>
+
+ <body>
+ <h1>Azure SignalR Serverless Sample</h1>
+ <div id="messages"></div>
+ <script src="https://cdnjs.cloudflare.com/ajax/libs/microsoft-signalr/3.1.7/signalr.min.js"></script>
+ <script>
+ let messages = document.querySelector('#messages');
+ const apiBaseUrl = window.location.origin;
+ const connection = new signalR.HubConnectionBuilder()
+ .withUrl(apiBaseUrl + '/api')
+ .configureLogging(signalR.LogLevel.Information)
+ .build();
+ connection.on('newMessage', (message) => {
+ document.getElementById("messages").innerHTML = message;
+ });
+
+ connection.start()
+ .catch(console.error);
+ </script>
+ </body>
+
+ </html>
+ ```
+
+### Add the SignalR Service connection string to the function app settings
+
+The last step is to set the SignalR Service connection string in Azure Function app settings.
+
+1. In the Azure portal, go to the SignalR instance you deployed earlier.
+1. Select **Keys** to view the connection strings for the SignalR Service instance.
+
+ :::image type="content" source="media/signalr-quickstart-azure-functions-javascript/signalr-quickstart-keys.png" alt-text="Screenshot of Azure SignalR service Keys page.":::
+
+1. Copy the primary connection string, and execute the command:
-1. Azure Functions requires a storage account to work. You can install and run the [Azure Storage Emulator](../storage/common/storage-use-azurite.md). **Or** you can update the setting to use your real storage account with the following command:
```bash
- func settings add AzureWebJobsStorage "<storage-connection-string>"
+ func settings add AzureSignalRConnectionString "<signalr-connection-string>"
```
-4. We're almost done now. The last step is to set a connection string of the SignalR Service to Azure Function settings.
+### Run the Azure Function app locally
- 1. In the Azure portal, search for the SignalR Service instance you deployed earlier. Select the instance to open it.
+Start the Azurite storage emulator:
- ![Search for the SignalR Service instance](media/signalr-quickstart-azure-functions-csharp/signalr-quickstart-search-instance.png)
+ ```bash
+ azurite
+ ```
- 2. Select **Keys** to view the connection strings for the SignalR Service instance.
+Run the Azure Function app in the local environment:
+
+ ```bash
+ func start
+ ```
- ![Screenshot that highlights the primary connection string.](media/signalr-quickstart-azure-functions-javascript/signalr-quickstart-keys.png)
-
- 3. Copy the primary connection string, and then run the following command:
-
- ```bash
- func settings add AzureSignalRConnectionString "<signalr-connection-string>"
- ```
-
-5. Run the Azure Function in the local environment:
-
- ```bash
- func start
- ```
+> [!NOTE]
+> If you see errors about reading from blob storage, make sure the `AzureWebJobsStorage` setting in the *local.settings.json* file is set to `UseDevelopmentStorage=true`.
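You can set this value with the Core Tools from the project directory; a minimal sketch that only updates *local.settings.json* for local runs:

```bash
# Point the local function host at the Azurite storage emulator
func settings add AzureWebJobsStorage "UseDevelopmentStorage=true"
```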
- After the Azure Function is running locally, go to `http://localhost:7071/api/index` and you'll see the current star count. If you star or unstar in GitHub, you'll get a refreshed star count every few seconds.
+After the Azure Function is running locally, go to `http://localhost:7071/api/index`. The page displays the current star count for the GitHub Azure/azure-signalr repository. When you star or unstar the repository in GitHub, you'll see the refreshed count every few seconds.
[!INCLUDE [Cleanup](includes/signalr-quickstart-cleanup.md)]
azure-video-indexer Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/face-detection.md
To see face detection insight in the JSON file, do the following:
To download the JSON file via the API, [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
+> [!IMPORTANT]
+> When reviewing face detections in the UI, you may not see all faces. We expose only face groups with a confidence of more than 0.5, and the face must appear for a minimum of 4 seconds or 10% of the video duration. Only when these conditions are met are faces shown in the UI and in the Insights.json. You can always retrieve all face instances from the Face Artifact file using the API `https://api.videoindexer.ai/{location}/Accounts/{accountId}/Videos/{videoId}/ArtifactUrl[?Faces][&accessToken]`
+ ## Face detection components During the Faces Detection procedure, images in a media file are processed, as follows:
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Before you begin the prerequisites, review the [Performance best practices](#per
`az feature register --name "ANFAvsDataStore" --namespace "Microsoft.NetApp"` `az feature show --name "ANFAvsDataStore" --namespace "Microsoft.NetApp" --query properties.state`
- 1. Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. For optimal performance, it's recommended to use the Ultra tier. Select option **Azure VMware Solution Datastore** listed under the **Protocol** section.
+ 1. Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. Select option **Azure VMware Solution Datastore** listed under the **Protocol** section.
1. Create a volume with **Standard** [network features](../azure-netapp-files/configure-network-features.md) if available for ExpressRoute FastPath connectivity. 1. Under the **Protocol** section, select **Azure VMware Solution Datastore** to indicate the volume is created to use as a datastore for Azure VMware Solution private cloud. 1. If you're using [export policies](../azure-netapp-files/azure-netapp-files-configure-export-policy.md) to control, access to Azure NetApp Files volumes, enable the Azure VMware private cloud IP range, not individual host IPs. Faulty hosts in a private cloud could get replaced so if the IP isn't enabled, connectivity to datastore will be impacted.
azure-vmware Configure Dns Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-dns-azure-vmware-solution.md
The diagram shows that the NSX-T Data Center DNS Service can forward DNS queries
1. Repeat the above steps for other FQDN zones, including any applicable reverse lookup zones.
+## Change the default T1 DNS forwarder zone
+ 1. In your Azure VMware Solution private cloud, under **Workload Networking**, select **DNS** > **DNS zones**. Check **TNT##-DNS-FORWARDER-ZONE**, and then select **Edit**.
+
+ ![AVS-DNS](https://user-images.githubusercontent.com/7501186/226980095-b0576824-e1b7-46dc-b726-58670e4e3096.png)
+
+
+ 2. Change the DNS server entries to valid, reachable IP addresses, and then select **OK**.
+
+ ![Edit_DNS_Zone](https://user-images.githubusercontent.com/7501186/226980023-8b92fce9-310e-4934-9045-238bcd5d921f.png)
+
+ >[!IMPORTANT]
+ >A DNS endpoint that is unreachable by the NSX-T DNS server results in an NSX-T alarm stating that the endpoint is unreachable. With the default configuration provided with Azure VMware Solution, this occurs because internet access is disabled by default. You can acknowledge and ignore the alarm, or change the default configuration above to point to a valid, reachable endpoint.
## Verify name resolution operations
azure-web-pubsub Howto Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-custom-domain.md
description: How to configure a custom domain for Azure Web PubSub Service - Previously updated : 07/07/2022+ Last updated : 03/30/2023 # Configure a custom domain for Azure Web PubSub Service
-In addition to the default domain provided Azure Web PubSub Service, you can also add custom domains.
+In addition to the default domain provided by the Azure Web PubSub Service, you can also add a custom domain. A custom domain is a domain name that you own and manage. You can use a custom domain to access your Azure Web PubSub Service resource. For example, you can use `contoso.example.com` instead of `contoso.webpubsub.azure.com` to access your Azure Web PubSub Service resource.
## Prerequisites
-* Resource must be Premium tier
-* A custom certificate matching custom domain is stored in Azure Key Vault
+* An Azure account with an active subscription. If you don't have an Azure account, you can [create an account for free](https://azure.microsoft.com/free/).
+* An Azure Web PubSub service (must be Premium tier).
+* An Azure Key Vault resource.
+* A custom certificate matching custom domain that is stored in Azure Key Vault.
## Add a custom certificate
-Before you can add a custom domain, you need add a matching custom certificate first. A custom certificate is a sub resource of your Azure Web PubSub Service. It references a certificate in your Azure Key Vault. For security and compliance reasons, Azure Web PubSub Service doesn't permanently store your certificate. Instead it fetches it from your Key Vault on the fly and keeps it in memory.
+Before you can add a custom domain, you need to add a matching custom certificate first. A custom certificate is a resource of your Azure Web PubSub Service. It references a certificate in your Azure Key Vault. For security and compliance reasons, Azure Web PubSub Service doesn't permanently store your certificate. Instead it fetches it from your Key Vault on the fly and keeps it in memory.
### Step 1: Grant your Azure Web PubSub Service resource access to Key Vault
Azure Web PubSub Service uses Managed Identity to access your Key Vault. In orde
1. In the Azure portal, go to your Azure Web PubSub Service resource. 1. In the menu pane, select **Identity**.
-1. Turn on either **System assigned** or **User assigned** identity. Click **Save**.
+1. You can select **System assigned** or **User assigned** identity. If you want to use **User assigned** identity, you need to create one first.
+ 1. To add a System assigned identity
+ 1. Select **On**.
+ 1. Select **Yes** to confirm.
+ 1. Select **Save**.
- :::image type="content" alt-text="Screenshot of enabling managed identity." source="media\howto-custom-domain\portal-identity.png" :::
+ :::image type="content" alt-text="Screenshot of enabling system assigned managed identity." source="media\howto-custom-domain\portal-identity.png" :::
+
+ 1. To add a User assigned identity:
+ 1. Select **Add user assigned managed identity**.
+ 1. Select an existing identity.
+ 1. Select **Add**.
+
+ :::image type="content" alt-text="Screenshot of enabling user assigned managed identity." source="media\howto-custom-domain\portal-user-assigned-identity.png" :::
+
+1. Select **Save**.
Depending on how you configure your Key Vault permission model, you may need to grant permissions at different places.
If you're using Key Vault built-in access policy as Key Vault permission model:
:::image type="content" alt-text="Screenshot of built-in access policy selected as Key Vault permission model." source="media\howto-custom-domain\portal-key-vault-perm-model-access-policy.png" ::: 1. Go to your Key Vault resource.
-1. In the menu pane, select **Access configuration**. Click **Go to access policies**.
-1. Click **Create**. Select **Secret Get** permission and **Certificate Get** permission. Click **Next**.
+1. In the menu pane, select **Access configuration**.
+1. Select **Vault access policy**.
+1. Select **Go to access policies**.
+1. Select **Create**.
+1. Select **Secret Get** permission.
+1. Select **Certificate Get** permission.
+1. Select **Next**.
:::image type="content" alt-text="Screenshot of permissions selection in Key Vault." source="media\howto-custom-domain\portal-key-vault-permissions.png" :::
-1. Search for the Azure Web PubSub Service resource name or the user assigned identity name. Click **Next**.
+1. Search for the Azure Web PubSub Service resource name.
+1. Select **Next**.
:::image type="content" alt-text="Screenshot of principal selection in Key Vault." source="media\howto-custom-domain\portal-key-vault-principal.png" :::
-1. Skip **Application (optional)**. Click **Next**.
-1. In **Review + create**, click **Create**.
+1. Select **Next** on the **Application** tab.
+1. Select **Create**.
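If you prefer scripting this step, the same access policy can be sketched with the Azure CLI; the Key Vault name and the managed identity's object ID (shown on the Web PubSub **Identity** page) are placeholders:

```bash
# Grant the Web PubSub managed identity permission to get secrets and certificates
az keyvault set-policy \
  --name <key-vault-name> \
  --object-id <web-pubsub-identity-object-id> \
  --secret-permissions get \
  --certificate-permissions get
```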
#### [Azure role-based access control](#tab/azure-rbac)
If you're using Azure role-based access control as Key Vault permission model:
:::image type="content" alt-text="Screenshot of Azure RBAC selected as Key Vault permission model." source="media\howto-custom-domain\portal-key-vault-perm-model-rbac.png" ::: 1. Go to your Key Vault resource.
-1. In the menu pane, select **Access control (IAM)**.
-1. Click **Add**. Select **Add role assignment**.
+1. Select **Go to access control (IAM)** from the menu.
+1. Select **Add**, then select **Add role assignment** from the drop-down.
:::image type="content" alt-text="Screenshot of Key Vault IAM." source="media\howto-custom-domain\portal-key-vault-iam.png" :::
-1. Under the **Role** tab, select **Key Vault Secrets User**. Click **Next**.
+1. Under the **Role** tab, select **Key Vault Secrets User**. Select **Next**.
:::image type="content" alt-text="Screenshot of role tab when adding role assignment to Key Vault." source="media\howto-custom-domain\portal-key-vault-role.png" :::
-1. Under the **Members** tab, select **Managed identity**. 1. Search for the Azure Web PubSub Service resource name or the user assigned identity name. Click **Next**.
+1. Under the **Members** tab, select **Managed identity**.
+1. Search for and **Select** the Azure Web PubSub Service resource name or the user assigned identity name.
:::image type="content" alt-text="Screenshot of members tab when adding role assignment to Key Vault." source="media\howto-custom-domain\portal-key-vault-members.png" :::
-1. Click **Review + assign**.
+1. Select **Next**.
+1. Select **Review + assign**.
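The same role assignment can be sketched with the Azure CLI; the object ID and Key Vault resource ID are placeholders:

```bash
# Assign the Key Vault Secrets User role to the Web PubSub managed identity
az role assignment create \
  --role "Key Vault Secrets User" \
  --assignee-object-id <web-pubsub-identity-object-id> \
  --assignee-principal-type ServicePrincipal \
  --scope <key-vault-resource-id>
```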
--
If you're using Azure role-based access control as Key Vault permission model:
1. In the Azure portal, go to your Azure Web PubSub Service resource. 1. In the menu pane, select **Custom domain**.
-1. Under **Custom certificate**, click **Add**.
+1. In the **Custom certificate** section, select **Add**.
:::image type="content" alt-text="Screenshot of custom certificate management." source="media\howto-custom-domain\portal-custom-certificate-management.png" ::: 1. Fill in a name for the custom certificate.
-1. Click **Select from your Key Vault** to choose a Key Vault certificate. After selection the following **Key Vault Base URI**, **Key Vault Secret Name** should be automatically filled. Alternatively you can also fill in these fields manually.
+1. Select **Select from your Key Vault** to choose a Key Vault certificate. After you make a selection, the **Key Vault Base URI** and **Key Vault Secret Name** fields are filled in automatically. Alternatively, you can fill in these fields manually.
1. Optionally, you can specify a **Key Vault Secret Version** if you want to pin the certificate to a specific version.
-1. Click **Add**.
+1. Select **Add**.
:::image type="content" alt-text="Screenshot of adding a custom certificate." source="media\howto-custom-domain\portal-custom-certificate-add.png" :::
-Azure Web PubSub Service will then fetch the certificate and validate its content. If everything is good, the **Provisioning State** will be **Succeeded**.
+Azure Web PubSub Service fetches the certificate and validates its contents. When it succeeds, the certificate's **Provisioning State** will be **Succeeded**.
:::image type="content" alt-text="Screenshot of an added custom certificate." source="media\howto-custom-domain\portal-custom-certificate-added.png" :::
To validate the ownership of your custom domain, you need to create a CNAME reco
For example, if your default domain is `contoso.webpubsub.azure.com`, and your custom domain is `contoso.example.com`, you need to create a CNAME record on `example.com` like:
-```
+```plaintext
contoso.example.com. 0 IN CNAME contoso.webpubsub.azure.com. ```
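You can confirm that the record has propagated before moving on, for example with `nslookup` and the example names above:

```bash
# Verify the CNAME record resolves to the default domain
nslookup -type=CNAME contoso.example.com
```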
-If you're using Azure DNS Zone, see [manage DNS records](../dns/dns-operations-recordsets-portal.md) for how to add a CNAME record.
+If you're using Azure DNS Zone, see [manage DNS records](../dns/dns-operations-recordsets-portal.md) to learn how to add a CNAME record.
:::image type="content" alt-text="Screenshot of adding a CNAME record in Azure DNS Zone." source="media\howto-custom-domain\portal-dns-cname.png" :::
A custom domain is another sub resource of your Azure Web PubSub Service. It con
1. In the Azure portal, go to your Azure Web PubSub Service resource. 1. In the menu pane, select **Custom domain**.
-1. Under **Custom domain**, click **Add**.
+1. Under **Custom domain**, select **Add**.
:::image type="content" alt-text="Screenshot of custom domain management." source="media\howto-custom-domain\portal-custom-domain-management.png" :::
-1. Fill in a name for the custom domain. It's the sub resource name.
-1. Fill in the domain name. It's the full domain name of your custom domain, for example, `contoso.com`.
+1. Enter a name for the custom domain. It's the sub resource name.
+1. Enter the domain name. It's the full domain name of your custom domain, for example, `contoso.com`.
1. Select a custom certificate that applies to this custom domain.
-1. Click **Add**.
+1. Select **Add**.
:::image type="content" alt-text="Screenshot of adding a custom domain." source="media\howto-custom-domain\portal-custom-domain-add.png" :::
$ curl -vvv https://contoso.example.com/api/health
--
-It should return `200` status code without any certificate error.
+The health API should return a `200` status code without any certificate error.
## Key Vault in private network
-If you have configured [Private Endpoint](../private-link/private-endpoint-overview.md) to your Key Vault, Azure Web PubSub Service cannot access the Key Vault via public network. You need to set up a [Shared Private Endpoint](./howto-secure-shared-private-endpoints-key-vault.md) to let Azure Web PubSub Service access your Key Vault via private network.
+If you've configured a [Private Endpoint](../private-link/private-endpoint-overview.md) to your Key Vault, Azure Web PubSub Service can't access the Key Vault via public network. You need to set up a [shared private endpoint](./howto-secure-shared-private-endpoints-key-vault.md) to let Azure Web PubSub Service access your Key Vault via private network.
-After you create a Shared Private Endpoint, you can create a custom certificate as usual. **You don't have to change the domain in Key Vault URI**. For example, if your Key Vault base URI is `https://contoso.vault.azure.net`, you still use this URI to configure custom certificate.
+After you create a shared private endpoint, you can create a custom certificate as usual. **You don't have to change the domain in Key Vault URI**. For example, if your Key Vault base URI is `https://contoso.vault.azure.net`, you still use this URI to configure the custom certificate.
You don't have to explicitly allow Azure Web PubSub Service IPs in Key Vault firewall settings. For more info, see [Key Vault private link diagnostics](../key-vault/general/private-link-diagnostics.md). ## Next steps
-+ [How to enable managed identity for Azure Web PubSub Service](howto-use-managed-identity.md)
-+ [Get started with Key Vault certificates](../key-vault/certificates/certificate-scenarios.md)
-+ [What is Azure DNS](../dns/dns-overview.md)
+* [How to enable managed identity for Azure Web PubSub Service](howto-use-managed-identity.md)
+* [Get started with Key Vault certificates](../key-vault/certificates/certificate-scenarios.md)
+* [What is Azure DNS](../dns/dns-overview.md)
azure-web-pubsub Howto Secure Shared Private Endpoints Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-secure-shared-private-endpoints-key-vault.md
description: How to access key vault in private network through Shared Private Endpoints - Previously updated : 01/03/2023+ Last updated : 03/27/2023
-# Access Key Vault in private network through Shared Private Endpoints
+# Access Key Vault in private network through shared private endpoints
-Azure Web PubSub Service can access your Key Vault in private network through Shared Private Endpoints. In this way you don't have to expose your Key Vault on public network.
+Azure Web PubSub Service can access your Key Vault in a private network through shared private endpoint connections. This article shows you how to configure your Web PubSub service instance to route outbound calls to a key vault through a shared private endpoint rather than the public network.
:::image type="content" alt-text="Diagram showing architecture of shared private endpoint." source="media\howto-secure-shared-private-endpoints-key-vault\shared-private-endpoint-overview.png" :::
-## Shared Private Link Resources Management
-
-Private endpoints of secured resources that are created through Azure Web PubSub Service APIs are referred to as *shared private link resources*. This is because you're "sharing" access to a resource, such as an Azure Key Vault, that has been integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/). These private endpoints are created inside Azure Web PubSub Service execution environment and aren't directly visible to you.
+Private endpoints of secured resources created through Azure Web PubSub Service APIs are referred to as *shared private-link resources*. This is because you're "sharing" access to a resource, such as an Azure Key Vault, that has been integrated with the [Azure Private Link service](../private-link/private-link-overview.md). These private endpoints are created inside the Azure Web PubSub Service execution environment and aren't directly visible to you.
> [!NOTE]
-> The examples in this article are based on the following assumptions:
+> The examples in this article use the following resource IDs:
+>
> * The resource ID of this Azure Web PubSub Service is */subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webpubsub/contoso-webpubsub*.
-> * The resource ID of Azure Key Vault is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.KeyVault/vaults/contoso-kv_.
+> * The resource ID of Azure Key Vault is */subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.KeyVault/vaults/contoso-kv*.
+>
+> When following the steps, substitute the resource IDs of your Azure Web PubSub Service and Azure Key Vault.
+
+## Prerequisites
-The rest of the examples show how the *contoso-webpubsub* service can be configured so that its outbound calls to Key Vault go through a private endpoint rather than public network.
+* An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* [Azure CLI](/cli/azure/install-azure-cli) 2.25.0 or later (if using Azure CLI).
+* An Azure Web PubSub Service instance in a **Standard** pricing tier or higher.
+* An Azure Key Vault resource.
-### Step 1: Create a shared private link resource to the Key Vault
+### 1. Create a shared private endpoint resource to the Key Vault
#### [Azure portal](#tab/azure-portal)
-1. In the Azure portal, go to your Azure Web PubSub Service resource.
-1. In the menu pane, select **Networking**. Switch to **Private access** tab.
-1. Click **Add shared private endpoint**.
+1. In the Azure portal, go to your Azure Web PubSub Service resource page.
+1. Select **Networking** from the menu.
+1. Select the **Private access** tab.
+1. Select **Add shared private endpoint**.
:::image type="content" alt-text="Screenshot of shared private endpoints management." source="media\howto-secure-shared-private-endpoints-key-vault\portal-shared-private-endpoints-management.png" lightbox="media\howto-secure-shared-private-endpoints-key-vault\portal-shared-private-endpoints-management.png" :::
-1. Fill in a name for the shared private endpoint.
-1. Select the target linked resource either by selecting from your owned resources or by filling a resource ID.
-1. Click **Add**.
+1. Enter a **Name** for the shared private endpoint.
+1. Enter your key vault resource by choosing **Select from your resources** and selecting your resource from the lists, or by choosing **Specify resource ID** and entering your key vault resource ID.
+1. Enter *please approve* for the **Request message**.
+1. Select **Add**.
:::image type="content" alt-text="Screenshot of adding a shared private endpoint." source="media\howto-secure-shared-private-endpoints-key-vault\portal-shared-private-endpoints-add.png" :::
-1. The shared private endpoint resource will be in **Succeeded** provisioning state. The connection state is **Pending** approval at target resource side.
+The shared private endpoint resource provisioning state is **Succeeded**. The connection state is **Pending**, awaiting approval on the target resource side.
:::image type="content" alt-text="Screenshot of an added shared private endpoint." source="media\howto-secure-shared-private-endpoints-key-vault\portal-shared-private-endpoints-added.png" lightbox="media\howto-secure-shared-private-endpoints-key-vault\portal-shared-private-endpoints-added.png" ::: #### [Azure CLI](#tab/azure-cli)
-You can make the following API call with the [Azure CLI](/cli/azure/) to create a shared private link resource:
+You can make the following API call with the [Azure CLI](/cli/azure/) to create a shared private link resource. Replace the `uri` with your own value.
-```dotnetcli
+```azurecli
az rest --method put --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webpubsub/contoso-webpubsub/sharedPrivateLinkResources/kv-pe?api-version=2022-08-01-preview --body @create-pe.json ```
-The contents of the *create-pe.json* file, which represent the request body to the API, are as follows:
+The contents of the *create-pe.json* file, which represents the request body to the API, are as follows:
```json {
The contents of the *create-pe.json* file, which represent the request body to t
} ```
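The full body isn't shown above. As a rough sketch, a shared private link resource that targets a Key Vault supplies the Key Vault resource ID, the Private Link group ID for Key Vault (`vault`), and a request message; treat the property values below as illustrative placeholders:

```bash
# Illustrative create-pe.json for a Key Vault target. "vault" is the Private Link
# group ID for Azure Key Vault; the resource ID matches the example in this article.
cat > create-pe.json <<'EOF'
{
  "properties": {
    "privateLinkResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.KeyVault/vaults/contoso-kv",
    "groupId": "vault",
    "requestMessage": "please approve"
  }
}
EOF
```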
-The process of creating an outbound private endpoint is a long-running (asynchronous) operation. As in all asynchronous Azure operations, the `PUT` call returns an `Azure-AsyncOperation` header value that looks like the following:
+The process of creating an outbound private endpoint is a long-running (asynchronous) operation. As in all asynchronous Azure operations, the `PUT` call returns an `Azure-AsyncOperation` header value that looks like the following output:
-```plaintext
+```output
"Azure-AsyncOperation": "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webpubsub/contoso-webpubsub/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2022-08-01-preview" ```
-You can poll this URI periodically to obtain the status of the operation.
+You can poll this URI periodically to obtain the status of the operation. Wait for the status to change to "Succeeded" before proceeding to the next steps.
-If you're using the CLI, you can poll for the status by manually querying the `Azure-AsyncOperationHeader` value,
+You can poll for the status by manually querying the `Azure-AsyncOperation` header value:
-```dotnetcli
+```azurecli
az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webpubsub/contoso-webpubsub/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2022-08-01-preview ```
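For example, a small shell loop can repeat the query until the operation completes. This sketch assumes the operation status document exposes a top-level `status` field and uses the example operation URI from the header above:

```azurecli
# Poll the asynchronous operation every 10 seconds until it reports "Succeeded".
OPERATION_URI="https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webpubsub/contoso-webpubsub/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2022-08-01-preview"

until [ "$(az rest --method get --uri "$OPERATION_URI" --query status --output tsv)" = "Succeeded" ]; do
  echo "Waiting for the shared private link resource to be created..."
  sleep 10
done
```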
-Wait until the status changes to "Succeeded" before proceeding to the next steps.
- --
-### Step 2a: Approve the private endpoint connection for the Key Vault
+### 2. Approve the private endpoint connection for the Key Vault
+
+After the private endpoint connection has been created, you need to approve the connection request from the Azure Web PubSub Service in your key vault resource.
#### [Azure portal](#tab/azure-portal)
-1. In the Azure portal, select the **Networking** tab of your Key Vault and navigate to **Private endpoint connections**. After the asynchronous operation has succeeded, there should be a request for a private endpoint connection with the request message from the previous API call.
+1. In the Azure portal, go to your key vault resource page.
+1. Select **Networking** from the menu.
+1. Select **Private endpoint connections**.
:::image type="content" alt-text="Screenshot of the Azure portal, showing the Private endpoint connections pane." source="media\howto-secure-shared-private-endpoints-key-vault\portal-key-vault-approve-private-endpoint.png" :::
-1. Select the private endpoint that Azure Web PubSub Service created. Click **Approve**.
-
- Make sure that the private endpoint connection appears as shown in the following screenshot. It could take one to two minutes for the status to be updated in the portal.
+1. Select the private endpoint that Azure Web PubSub Service created.
+1. Select **Approve** and **Yes** to confirm.
+1. Wait for the private endpoint connection to be approved.
:::image type="content" alt-text="Screenshot of the Azure portal, showing an Approved status on the Private endpoint connections pane." source="media\howto-secure-shared-private-endpoints-key-vault\portal-key-vault-approved-private-endpoint.png" :::
Wait until the status changes to "Succeeded" before proceeding to the next steps
1. List private endpoint connections.
- ```dotnetcli
- az network private-endpoint-connection list -n <key-vault-resource-name> -g <key-vault-resource-group-name> --type 'Microsoft.KeyVault/vaults'
+ ```azurecli
+ az network private-endpoint-connection list --name <key-vault-resource-name> --resource-group <key-vault-resource-group-name> --type 'Microsoft.KeyVault/vaults'
```
- There should be a pending private endpoint connection. Note down its ID.
+ There should be a pending private endpoint connection. Note its `id`.
```json [
Wait until the status changes to "Succeeded" before proceeding to the next steps
1. Approve the private endpoint connection.
- ```dotnetcli
+ ```azurecli
az network private-endpoint-connection approve --id <private-endpoint-connection-id> ``` --
-### Step 2b: Query the status of the shared private link resource
+### 3. Query the status of the shared private link resource
-It takes minutes for the approval to be propagated to Azure Web PubSub Service. You can check the state using either Azure portal or Azure CLI.
+It takes a few minutes for the approval to be propagated to Azure Web PubSub Service. You can check the state using either the Azure portal or the Azure CLI. The shared private endpoint between Azure Web PubSub Service and Azure Key Vault is active when the connection state is **Approved**.
#### [Azure portal](#tab/azure-portal)
+1. Go to the Azure Web PubSub Service resource in the Azure portal.
+1. Select **Networking** from the menu.
+1. Select **Shared private link resources**.
+ :::image type="content" alt-text="Screenshot of an approved shared private endpoint." source="media\howto-secure-shared-private-endpoints-key-vault\portal-shared-private-endpoints-approved.png" lightbox="media\howto-secure-shared-private-endpoints-key-vault\portal-shared-private-endpoints-approved.png" ::: #### [Azure CLI](#tab/azure-cli)
-```dotnetcli
+```azurecli
az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webpubsub/contoso-webpubsub/sharedPrivateLinkResources/func-pe?api-version=2022-08-01-preview ```
-This would return a JSON, where the connection state would show up as "status" under the "properties" section.
+This command returns a JSON response, in which the connection state shows up as "status" under the "properties" section.
```json {
This would return a JSON, where the connection state would show up as "status" u
```
-If the "Provisioning State" (`properties.provisioningState`) of the resource is `Succeeded` and "Connection State" (`properties.status`) is `Approved`, it means that the shared private link resource is functional and Azure Web PubSub Service can communicate over the private endpoint.
+When the "Provisioning State" (`properties.provisioningState`) of the resource is `Succeeded` and "Connection State" (`properties.status`) is `Approved`, the shared private link resource is functional, and Azure Web PubSub Service can communicate over the private endpoint.
--
-At this point, the private endpoint between Azure Web PubSub Service and Azure Key Vault is established.
-
-Now you can configure features like custom domain as usual. **You don't have to use a special domain for Key Vault**. DNS resolution is automatically handled by Azure Web PubSub Service.
+Now you can configure features like a custom domain as usual. You don't have to use a special domain for Key Vault. The Azure Web PubSub Service automatically handles DNS resolution.
## Next steps Learn more:
-+ [What are private endpoints?](../private-link/private-endpoint-overview.md)
-+ [Configure custom domain](howto-custom-domain.md)
+* [What are private endpoints?](../private-link/private-endpoint-overview.md)
+* [Configure a custom domain](howto-custom-domain.md)
azure-web-pubsub Howto Secure Shared Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-secure-shared-private-endpoints.md
Title: Secure Azure Web PubSub outbound traffic through Shared Private Endpoints
+ Title: Secure Azure Web PubSub outbound traffic through shared private endpoints
-description: How to secure Azure Web PubSub outbound traffic through Shared Private Endpoints to avoid traffic go to public network
+description: How to secure Azure Web PubSub outbound traffic through shared private endpoints
- - Previously updated : 11/08/2021+ Last updated : 03/27/2023
-# Secure Azure Web PubSub outbound traffic through Shared Private Endpoints
+# Secure Azure Web PubSub outbound traffic through shared private endpoints
-If you're using an [event handler](concept-service-internals.md#event-handler) in Azure Web PubSub Service, you might have outbound traffic to an upstream. Upstream such as
-Azure Web App and Azure Functions, can be configured to accept connections from a list of virtual networks and refuse outside connections that originate from a public network. You can create an outbound [private endpoint connection](../private-link/private-endpoint-overview.md) to reach these endpoints.
+If you're using an [event handler](concept-service-internals.md#event-handler) in Azure Web PubSub Service, you might have outbound traffic to upstream endpoints such as an Azure Web App or an Azure Function. Azure Web Apps and Azure Functions can be configured to accept connections from a list of virtual networks and refuse outside connections that originate from a public network. You can create an outbound [private endpoint connection](../private-link/private-endpoint-overview.md) in your Web PubSub service to reach these endpoints.
:::image type="content" alt-text="Diagram showing architecture of shared private endpoint." source="media\howto-secure-shared-private-endpoints\shared-private-endpoint-overview.png" border="false" :::
-This outbound method is subject to the following requirements:
+This article shows you how to configure your Web PubSub service to send upstream calls to an Azure Function through a shared private endpoint rather than the public network.
-+ The upstream must be Azure Web App or Azure Function.
+This outbound method is subject to the following requirements:
-+ The Azure Web PubSub Service service must be on the Standard or Premium tier.
+- The upstream endpoint must be Azure Web App or Azure Function.
+- The Azure Web PubSub Service must be on the Standard or Premium tier.
+- The Azure Web App or Azure Function must be on certain SKUs. See [Use Private Endpoints for Azure Web App](../app-service/networking/private-endpoint.md).
-+ The Azure Web App or Azure Function must be on certain SKUs. See [Use Private Endpoints for Azure Web App](../app-service/networking/private-endpoint.md).
+Private endpoints of secured resources created through Azure Web PubSub Service APIs are referred to as *shared private link resources*. This term is used because you're "sharing" access to a resource, such as an Azure Function that has been integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/). These private endpoints are created inside the Azure Web PubSub Service execution environment and aren't directly visible to you.
-## Shared Private Link Resources Management
+## Prerequisites
-Private endpoints of secured resources that are created through Azure Web PubSub Service APIs are referred to as *shared private link resources*. This term is used because you're "sharing" access to a resource, such as an Azure Function that has been integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/). These private endpoints are created inside Azure Web PubSub Service execution environment and aren't directly visible to you.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure Web PubSub Service instance.
+- An Azure Functions resource.
> [!NOTE]
-> The examples in this article are based on the following assumptions:
-> * The resource ID of this Azure Web PubSub Service is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webPubSub/contoso-webpubsub.
-> * The resource ID of upstream Azure Function is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Web/sites/contoso-func.
+> The examples in this article use the following values:
+>
+> - The resource ID of this Azure Web PubSub Service is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webPubSub/contoso-webpubsub_.
+> - The resource ID of upstream Azure Function is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Web/sites/contoso-func_.
+> Replace these values with your own subscription ID, Web PubSub Service name, and function name.
-The rest of the examples show how the _contoso-webpubsub_ service can be configured so that its upstream calls to function go through a private endpoint rather than public network.
-### Step 1: Create a shared private link resource to the function
+## Step 1: Create a shared private link resource to the function
-#### [Azure portal](#tab/azure-portal)
+### [Azure portal](#tab/azure-portal)
1. In the Azure portal, go to your Azure Web PubSub Service resource.
-1. In the menu pane, select **Networking**. Switch to **Private access** tab.
+1. Select **Networking** from the menu.
+1. Select the **Private access** tab.
1. Select **Add shared private endpoint**. :::image type="content" alt-text="Screenshot of shared private endpoints management." source="media\howto-secure-shared-private-endpoints\portal-shared-private-endpoints-management.png" lightbox="media\howto-secure-shared-private-endpoints\portal-shared-private-endpoints-management.png" :::
-1. Fill in a name for the shared private endpoint.
-1. Select the target linked resource either by selecting from your owned resources or by filling a resource ID.
+1. Enter a name for the shared private endpoint.
+1. Choose your target linked resource by selecting **Select from your resources**, or enter your resource ID by selecting **Specify resource ID**.
+1. Optionally, you may enter a **Request message** to be sent to the target resource owner.
1. Select **Add**. :::image type="content" alt-text="Screenshot of adding a shared private endpoint." source="media\howto-secure-shared-private-endpoints\portal-shared-private-endpoints-add.png" :::
-1. The shared private endpoint resource will be in **Succeeded** provisioning state. The connection state is **Pending** approval at target resource side.
+The shared private endpoint resource's **Provisioning state** is *Succeeded*. The **Connection state** is *Pending* until the endpoint is approved at the target resource.
:::image type="content" alt-text="Screenshot of an added shared private endpoint." source="media\howto-secure-shared-private-endpoints\portal-shared-private-endpoints-added.png" lightbox="media\howto-secure-shared-private-endpoints\portal-shared-private-endpoints-added.png" :::
-#### [Azure CLI](#tab/azure-cli)
+### [Azure CLI](#tab/azure-cli)
+
+You use the following API call with the [Azure CLI](/cli/azure/) to create a shared private link resource. Replace the values in the following example with your own values.
-You can make the following API call with the [Azure CLI](/cli/azure/) to create a shared private link resource:
-```dotnetcli
+```bash
az rest --method put --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webPubSub/contoso-webpubsub/sharedPrivateLinkResources/func-pe?api-version=2021-06-01-preview --body @create-pe.json --debug ```
-The contents of the *create-pe.json* file, which represent the request body to the API, are as follows:
+The *create-pe.json* file contains the request body to the API. It is similar to the following example:
```json {
The contents of the *create-pe.json* file, which represent the request body to t
} ```
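The body isn't reproduced in full here. As a rough sketch, a shared private link resource that targets a Function App supplies the Function resource ID, the Private Link group ID for App Service sites (`sites`), and a request message; the values below are illustrative placeholders:

```bash
# Illustrative create-pe.json for an Azure Function target. "sites" is the Private
# Link group ID for App Service and Azure Functions apps.
cat > create-pe.json <<'EOF'
{
  "properties": {
    "privateLinkResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Web/sites/contoso-func",
    "groupId": "sites",
    "requestMessage": "please approve"
  }
}
EOF
```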
-The process of creating an outbound private endpoint is a long-running (asynchronous) operation. As in all asynchronous Azure operations, the `PUT` call returns an `Azure-AsyncOperation` header value that looks like the following example.
+The process of creating an outbound private endpoint is a long-running (asynchronous) operation. As in all asynchronous Azure operations, the `PUT` call returns an `Azure-AsyncOperation` header value similar to the following example.
```plaintext "Azure-AsyncOperation": "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webPubSub/contoso-webpubsub/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2021-06-01-preview" ```
-You can poll this URI periodically to obtain the status of the operation.
+You can poll this URI periodically to obtain the status of the operation by manually querying the `Azure-AsyncOperation` header value.
-If you're using the CLI, you can poll for the status by manually querying the `Azure-AsyncOperationHeader` value,
-
-```dotnetcli
+```bash
az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webPubSub/contoso-webpubsub/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2021-06-01-preview ```
-Wait until the status changes to "Succeeded" before proceeding to the next steps.
+Wait until the status changes to "Succeeded" before proceeding to the next step.
--
-### Step 2a: Approve the private endpoint connection for the function
+## Step 2: Approve the private endpoint connection for the function
+
+When the shared private endpoint connection is in *Pending* state, you need to approve the connection request at the target resource.
> [!IMPORTANT] > After you approved the private endpoint connection, the Function is no longer accessible from public network. You may need to create other private endpoints in your own virtual network to access the Function endpoint.
-#### [Azure portal](#tab/azure-portal)
+### [Azure portal](#tab/azure-portal)
-1. In the Azure portal, select the **Networking** tab of your Function App and navigate to **Private endpoint connections**. Select **Configure your private endpoint connections**. After the asynchronous operation has succeeded, there should be a request for a private endpoint connection with the request message from the previous API call.
+1. In the Azure portal, go to your Function App.
+1. Select **Networking** from the menu.
+1. Select **Private endpoints** in the **Inbound Traffic** section.
+1. Select the pending connection that you created in your Web PubSub resource.
+1. Select **Approve** and **Yes** to confirm.
- :::image type="content" alt-text="Screenshot of the Azure portal, showing the Private endpoint connections pane." source="media\howto-secure-shared-private-endpoints\portal-function-approve-private-endpoint.png" lightbox="media\howto-secure-shared-private-endpoints\portal-function-approve-private-endpoint.png" :::
-1. Select the private endpoint that Azure Web PubSub Service created. In the **Private endpoint** column, identify the private endpoint connection by the name that's specified in the previous API, select **Approve**.
-
- Make sure that the private endpoint connection appears as shown in the following screenshot. It could take one to two minutes for the status to be updated in the portal.
+You can select **Refresh** to check the status. It could take a few minutes for the **Connection state** to update to *Approved*.
:::image type="content" alt-text="Screenshot of the Azure portal, showing an Approved status on the Private endpoint connections pane." source="media\howto-secure-shared-private-endpoints\portal-function-approved-private-endpoint.png" lightbox="media\howto-secure-shared-private-endpoints\portal-function-approved-private-endpoint.png" :::
-#### [Azure CLI](#tab/azure-cli)
+### [Azure CLI](#tab/azure-cli)
1. List private endpoint connections.
- ```dotnetcli
+ ```bash
az network private-endpoint-connection list -n <function-resource-name> -g <function-resource-group-name> --type 'Microsoft.Web/sites' ```
Wait until the status changes to "Succeeded" before proceeding to the next steps
1. Approve the private endpoint connection.
- ```dotnetcli
+ ```bash
az network private-endpoint-connection approve --id <private-endpoint-connection-id> ``` --
-### Step 2b: Query the status of the shared private link resource
+## Step 3: Query the status of the shared private link resource
+
+It takes a few minutes for the approval to be propagated to Azure Web PubSub Service. You can check the state using either Azure portal or Azure CLI.
-It takes minutes for the approval to be propagated to Azure Web PubSub Service. You can check the state using either Azure portal or Azure CLI.
+### [Azure portal](#tab/azure-portal)
-#### [Azure portal](#tab/azure-portal)
-
:::image type="content" alt-text="Screenshot of an approved shared private endpoint." source="media\howto-secure-shared-private-endpoints\portal-shared-private-endpoints-approved.png" lightbox="media\howto-secure-shared-private-endpoints\portal-shared-private-endpoints-approved.png" :::
-#### [Azure CLI](#tab/azure-cli)
+### [Azure CLI](#tab/azure-cli)
-```dotnetcli
+```bash
az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webPubSub/contoso-webpubsub/sharedPrivateLinkResources/func-pe?api-version=2021-06-01-preview ```
This command would return JSON, where the connection state would show up as "sta
```
-If the "Provisioning State" (`properties.provisioningState`) of the resource is `Succeeded` and "Connection State" (`properties.status`) is `Approved`, it means that the shared private link resource is functional, and Azure Web PubSub Service can communicate over the private endpoint.
+When the `properties.provisioningState` is `Succeeded` and `properties.status` (connection state) is `Approved`, the shared private link resource is functional, and Azure Web PubSub Service can communicate over the private endpoint.
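To pull out just those two fields, you can add a JMESPath query to the same call (the URI is the example value used above):

```azurecli
# Print only the provisioning state and connection state of the shared private link resource.
az rest --method get \
  --uri "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webPubSub/contoso-webpubsub/sharedPrivateLinkResources/func-pe?api-version=2021-06-01-preview" \
  --query "{provisioningState: properties.provisioningState, connectionState: properties.status}" \
  --output table
```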
--
-At this point, the private endpoint between Azure SignalR Service and Azure Function is established.
+At this point, the private endpoint between Azure Web PubSub Service and Azure Function is established.
-### Step 3: Verify upstream calls are from a private IP
+## Step 4: Verify upstream calls are from a private IP
Once the private endpoint is set up, you can verify incoming calls are from a private IP by checking the `X-Forwarded-For` header at upstream side.
Once the private endpoint is set up, you can verify incoming calls are from a pr
Learn more about private endpoints:
-+ [What are private endpoints?](../private-link/private-endpoint-overview.md)
+[What are private endpoints?](../private-link/private-endpoint-overview.md)
backup Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-overview.md
Title: What is Azure Backup? description: Provides an overview of the Azure Backup service, and how it contributes to your business continuity and disaster recovery (BCDR) strategy. Previously updated : 03/11/2022 Last updated : 04/01/2023
Azure Backup delivers these key benefits:
## How Azure Backup protects from ransomware?
-Azure Backup helps protect your critical business systems and backup data against a ransomware attack by implementing preventive measures and providing tools that protect your organization from every step that attackers take to infiltrate your systems. It provides security to your backup environment, both when your data is in transit and at rest. [Learn more](../security/fundamentals/backup-plan-to-protect-against-ransomware.md)
+Azure Backup helps protect your critical business systems and backup data against a ransomware attack by implementing preventive measures and providing tools that protect your organization from every step that attackers take to infiltrate your systems. It provides security to your backup environment, both when your data is in transit and at rest.
+
+In addition to the security features offered by default, you can also use several enhanced features that provide the highest levels of security for your backed-up data. Learn more about [security in Azure Backup](security-overview.md). Also, [learn](../security/fundamentals/backup-plan-to-protect-against-ransomware.md) how backups can better protect you against ransomware and how Azure helps you ensure rapid recovery.
## Next steps
backup Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/security-overview.md
Title: Overview of security features description: Learn about security capabilities in Azure Backup that help you protect your backup data and meet the security needs of your business. Previously updated : 03/12/2020 Last updated : 03/31/2023
Azure Backup has several security controls built into the service to prevent, de
## Separation between guest and Azure storage
-With Azure Backup, which includes virtual machine backup and SQL and SAP HANA in VM backup, the backup data is stored in Azure storage and the guest has no direct access to backup storage or its contents. With virtual machine backup, the backup snapshot creation and storage is done by Azure fabric where the guest has no involvement other than quiescing the workload for application consistent backups. With SQL and SAP HANA, the backup extension gets temporary access to write to specific blobs. In this way, even in a compromised environment, existing backups can't be tampered with or deleted by the guest.
+With Azure Backup, which includes virtual machine backup and SQL and SAP HANA in VM backup, the backup data is stored in Azure storage and the guest has no direct access to backup storage or its contents. With the virtual machine backup, the backup snapshot creation and storage are done by Azure fabric where the guest has no involvement other than quiescing the workload for application consistent backups. With SQL and SAP HANA, the backup extension gets temporary access to write to specific blobs. In this way, even in a compromised environment, existing backups can't be tampered with or deleted by the guest.
## Internet connectivity not required for Azure VM backup
Encryption protects your data and helps you to meet your organizational security
* When data is backed up from on-premises servers with the MARS agent, data is encrypted with a passphrase before upload to Azure Backup and decrypted only after it's downloaded from Azure Backup. Read more about [security features to help protect hybrid backups](#security-features-to-help-protect-hybrid-backups).
-## Protection of backup data from unintentional deletes
+## Soft delete
-Azure Backup provides security features to help protect backup data even after deletion. With soft delete, if user deletes the backup of a VM, the backup data is retained for 14 additional days, allowing the recovery of that backup item with no data loss. The additional 14 days retention of backup data in the "soft delete" state doesn't incur any cost to you. [Learn more about soft delete](backup-azure-security-feature-cloud.md).
+Azure Backup provides security features to help protect the backup data even after deletion. With soft delete, if you delete the backup of a VM, the backup data is retained for *14 additional days*, allowing the recovery of that backup item with no data loss. The additional 14 days of retention of backup data in the *soft delete* state doesn't incur any cost. [Learn more about soft delete](backup-azure-security-feature-cloud.md).
+
+Azure Backup has now also enhanced soft delete to further improve chances of recovering data after deletion. [Learn more](#enhanced-soft-delete).
+
+## Immutable vaults
+
+Immutable vault can help you protect your backup data by blocking any operations that could lead to loss of recovery points. Further, you can lock the immutable vault setting to make it irreversible, which prevents malicious actors from disabling immutability and deleting backups. [Learn more about immutable vaults](backup-azure-immutable-vault-concept.md).
+
+## Multi-user authorization
+
+Multi-user authorization (MUA) for Azure Backup allows you to add an additional layer of protection to critical operations on your Recovery Services vaults and Backup vaults. For MUA, Azure Backup uses another Azure resource called the Resource Guard to ensure critical operations are performed only with applicable authorization. [Learn more about multi-user authorization for Azure Backup](multi-user-authorization-concept.md).
+
+## Enhanced soft delete
+
+Enhanced soft delete provides you with the ability to recover your data even after it's deleted, accidentally or maliciously. It works by delaying the permanent deletion of data by a specified duration, providing you with an opportunity to retrieve it. You can also make soft delete *always-on* to prevent it from being disabled. [Learn more about enhanced soft delete for Backup](backup-azure-enhanced-soft-delete-about.md).
## Monitoring and alerts of suspicious activity
batch Simplified Compute Node Communication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-compute-node-communication.md
Title: Use simplified compute node communication description: Learn about the simplified compute node communication mode in the Azure Batch service and how to enable it. Previously updated : 11/17/2022 Last updated : 03/29/2023
An Azure Batch pool contains one or more compute nodes that execute user-specified workloads in the form of Batch tasks. To enable Batch functionality and Batch pool infrastructure management, compute nodes must communicate with the Azure Batch service. Batch supports two types of node communication modes:-- `Classic` where the Batch service initiates communication to the compute nodes-- `Simplified` where the compute nodes initiate communication to the Batch service
+- Classic: where the Batch service initiates communication to the compute nodes
+- Simplified: where the compute nodes initiate communication to the Batch service
This document describes the simplified compute node communication mode and the associated network configuration requirements.
This document describes the simplified compute node communication mode and the a
> Batch pools with [no public IP addresses](simplified-node-communication-pool-no-public-ip.md) using the > node management private endpoint without Internet outbound access.
+> [!WARNING]
+> The classic compute node communication model will be retired on **31 March 2026** and will be replaced with
+> the simplified compute node communication model as described in this document. For more information, see the
+> classic compute node communication mode
+> [migration guide](batch-pools-to-simplified-compute-node-communication-model-migration-guide.md).
+ ## Supported regions Simplified compute node communication in Azure Batch is currently available for the following regions:
Simplified compute node communication in Azure Batch is currently available for
- China: all China regions where Batch is present except for China North 1 and China East 1.
-## Compute node communication differences between Classic and Simplified
+## Compute node communication differences between classic and simplified modes
The simplified compute node communication mode streamlines the way Batch pool infrastructure is managed on behalf of users. This communication mode reduces the complexity and scope of inbound
NSGs, UDRs, and firewalls:
Outbound requirements for a Batch account can be discovered using the [List Outbound Network Dependencies Endpoints API](/rest/api/batchmanagement/batch-account/list-outbound-network-dependencies-endpoints)
-This API will report the base set of dependencies, depending upon the Batch account pool communication mode.
+This API reports the base set of dependencies, depending upon the Batch account pool communication mode.
User-specific workloads may need extra rules such as opening traffic to other Azure resources (such as Azure Storage for Application Packages, Azure Container Registry, etc.) or endpoints like the Microsoft package repository for virtual file system mounting functionality.
example, you can scope your outbound communication rules to Azure Storage to ena
storage accounts or other storage accounts for resource files or output files. Even if your workloads aren't currently impacted by the changes (as described in the next section), it's
-recommended to move to the `simplified` mode. Doing so will ensure your Batch workloads are ready for any
-future improvements enabled by this mode, and also for when this communication mode will move to become
-the default.
+recommended to move to the `simplified` mode. Future improvements in the Batch service may only be functional
+with simplified compute node communication.
## Potential impact between classic and simplified communication modes
-In many cases, the `simplified` communication mode won't directly affect your Batch workloads. However,
-simplified compute node communication will have an impact for the following cases:
+In many cases, the `simplified` communication mode doesn't directly affect your Batch workloads. However,
+simplified compute node communication has an impact for the following cases:
- Users who specify a Virtual Network as part of creating a Batch pool and do one or both of the following actions: - Explicitly disable outbound network traffic rules that are incompatible with simplified compute node communication.
The following set of steps is required to migrate to the new communication mode:
- Outbound: - Destination port 443 over TCP to Storage.*region* - Destination port 443 over ANY to BatchNodeManagement.*region*
-1. If you have any other inbound or outbound scenarios required by your workflow, you'll need to ensure that your rules reflect these requirements.
+1. If you have any other inbound or outbound scenarios required by your workflow, you need to ensure that your rules reflect these requirements.
1. Use one of the following options to update your workloads to use the new communication mode. - Create new pools with the `targetNodeCommunicationMode` set to `simplified` and validate that the new pools are working correctly. Migrate your workload to the new pools and delete any earlier pools. - Update existing pools `targetNodeCommunicationMode` property to `simplified` and then resize all existing pools to zero nodes and scale back out.
The following set of steps is required to migrate to the new communication mode:
- Outbound: - Destination port 443 over ANY to BatchNodeManagement.*region*
-If you follow these steps, but later want to switch back to `classic` compute node communication, you'll need to take the following actions:
+If you follow these steps, but later want to switch back to `classic` compute node communication, you need to take the following actions:
+1. Revert any networking configuration operating exclusively in `simplified` compute node communication mode.
1. Create new pools or update existing pools `targetNodeCommunicationMode` property set to `classic`. 1. Migrate your workload to these pools, or resize existing pools and scale back out (see step 3 above). 1. See step 4 above to confirm that your pools are operating in `classic` communication mode.
-1. Optionally revert your networking configuration.
+1. Optionally restore your networking configuration.
## Specifying the node communication mode on a Batch pool
-Below are examples of how to create a Batch pool with `simplified` compute node communication.
+The [`targetNodeCommunicationMode`](/rest/api/batchservice/pool/add) property on Batch pools allows you to indicate a preference
+to the Batch service for which communication mode to utilize between the Batch service and compute nodes. The following are
+the allowable options on this property:
+
+- `classic`: create the pool using classic compute node communication.
+- `simplified`: create the pool using simplified compute node communication.
+- `default`: allow the Batch service to select the appropriate compute node communication mode. For pools without a virtual
+network, the pool may be created in either `classic` or `simplified` mode. For pools with a virtual network, the pool will always
+default to `classic` until **30 September 2024**. For more information, see the classic compute node communication mode
+[migration guide](batch-pools-to-simplified-compute-node-communication-model-migration-guide.md).
> [!TIP] > Specifying the target node communication mode is a preference indication for the Batch service and not a guarantee that it > will be honored. Certain configurations on the pool may prevent the Batch service from honoring the specified target node > communication mode, such as interaction with No public IP address, virtual networks, and the pool configuration type.
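For reference, here's a minimal sketch of a pool specification (the Batch service [Pool - Add](/rest/api/batchservice/pool/add) body) that requests simplified communication and is passed to the CLI as a raw JSON file. The pool ID, VM size, image reference, and node count are example values, and the command assumes you've already authenticated against your Batch account (for example, with `az batch account login`) and are using a recent Azure CLI version:

```bash
# Example pool specification that opts in to simplified compute node communication.
cat > pool.json <<'EOF'
{
  "id": "simplified-mode-pool",
  "vmSize": "standard_d2s_v3",
  "virtualMachineConfiguration": {
    "imageReference": {
      "publisher": "canonical",
      "offer": "0001-com-ubuntu-server-focal",
      "sku": "20_04-lts",
      "version": "latest"
    },
    "nodeAgentSKUId": "batch.node.ubuntu 20.04"
  },
  "targetDedicatedNodes": 1,
  "targetNodeCommunicationMode": "simplified"
}
EOF

# Create the pool from the JSON specification.
az batch pool create --json-file pool.json
```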
+The following are examples of how to create a Batch pool with `simplified` compute node communication.
+ ### Azure portal Navigate to the Pools blade of your Batch account and click the Add button. Under `OPTIONAL SETTINGS`, you can
select `Simplified` as an option from the pull-down of `Node communication mode`
:::image type="content" source="media/simplified-compute-node-communication/add-pool-simplified-mode.png" alt-text="Screenshot that shows creating a pool with simplified mode."::: To update an existing pool to simplified communication mode, navigate to the Pools blade of your Batch account and
-click on the pool to update. On the left-side navigation, select `Node communication mode`. There you'll be able
+click on the pool to update. On the left-side navigation, select `Node communication mode`. There you're able
to select a new target node communication mode as shown below. After selecting the appropriate communication mode,
-click the `Save` button to update. You'll need to scale the pool down to zero nodes first, and then back out
+click the `Save` button to update. You need to scale the pool down to zero nodes first, and then back out
for the change to take effect, if conditions allow. :::image type="content" source="media/simplified-compute-node-communication/update-pool-simplified-mode.png" alt-text="Screenshot that shows updating a pool to simplified mode.":::
if specified on the pool. For more information, see the
[migration guide](batch-pools-without-public-ip-addresses-classic-retirement-migration-guide.md). - Cloud Service Configuration pools are currently not supported for simplified compute node communication and are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/).
-Specifying a communication mode for these types of pools aren't honored and will always result in `classic`
+Specifying a communication mode for these types of pools isn't honored and always results in `classic`
communication mode. We recommend using Virtual Machine Configuration for your Batch pools. For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
cloud-shell Cloud Shell Predictive Intellisense https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/cloud-shell-predictive-intellisense.md
Title: Predictive IntelliSense in Azure Cloud Shell description: Azure Cloud Shell uses Predictive IntelliSense------- Last updated 10/11/2022-+
+ Title: Predictive IntelliSense in Azure Cloud Shell
# Predictive IntelliSense in Azure Cloud Shell
open-source editor to edit the profile. To learn more, see [Azure Cloud Shell ed
Use the built-in Cloud Shell editor to edit the profile: ```powershell
-Code $Profile
+code $Profile
``` ## Next steps
For more information on PowerShell profiles, see [About_Profiles][06].
[03]: /powershell/module/psreadline/set-psreadlineoption [04]: ./using-cloud-shell-editor.md [05]: /powershell/scripting/learn/shell/using-predictors
-[06]: /powershell/module/microsoft.powershell.core/about/about_profiles
+[06]: /powershell/module/microsoft.powershell.core/about/about_profiles
cloud-shell Embed Cloud Shell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/embed-cloud-shell.md
- description: Learn to embed Azure Cloud Shell.-- ms.contributor: jahelmic Last updated 11/14/2022- -- tags: azure-resource-manager Title: Embed Azure Cloud Shell
cloud-shell Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/features.md
- description: Overview of features in Azure Cloud Shell-- ms.contributor: jahelmic Last updated 03/03/2023- -- tags: azure-resource-manager Title: Azure Cloud Shell features
cloud-shell Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/limitations.md
- description: Overview of limitations of Azure Cloud Shell-- ms.contributor: jahelmic Last updated 03/03/2023- -- tags: azure-resource-manager Title: Azure Cloud Shell limitations
cloud-shell Msi Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/msi-authorization.md
- description: How to acquire a token for the authenticated user in Azure Cloud Shell-- ms.contributor: jahelmic Last updated 11/14/2022- -- tags: azure-resource-manager Title: Acquiring a user token in Azure Cloud Shell
cloud-shell Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/overview.md
- description: Overview of the Azure Cloud Shell.-- ms.contributor: jahelmic Last updated 03/03/2023- - tags: azure-resource-manager Title: Azure Cloud Shell overview
cloud-shell Persisting Shell Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/persisting-shell-storage.md
- description: Walkthrough of how Azure Cloud Shell persists files.-- ms.contributor: jahelmic Last updated 11/14/2022- -- tags: azure-resource-manager Title: Persist files in Azure Cloud Shell
cloud-shell Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/pricing.md
- description: Overview of pricing of Azure Cloud Shell-- ms.contributor: jahelmic Last updated 11/14/2022- - tags: azure-resource-manager Title: Azure Cloud Shell pricing
cloud-shell Private Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/private-vnet.md
- description: Deploy Cloud Shell into an Azure virtual network-- ms.contributor: jahelmic Last updated 11/14/2022- -- tags: azure-resource-manager Title: Cloud Shell in an Azure virtual network
cloud-shell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/quickstart.md
- description: Learn how to start using Azure Cloud Shell.-- ms.contributor: jahelmic Last updated 03/06/2023- - tags: azure-resource-manager Title: Quickstart for Azure Cloud Shell
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md
- description: This article covers troubleshooting Cloud Shell common scenarios.-- ms.contributor: jahelmic Last updated 11/14/2022- -- tags: azure-resource-manager Title: Azure Cloud Shell troubleshooting
cloud-shell Using Cloud Shell Editor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/using-cloud-shell-editor.md
- description: Overview of how to use the Azure Cloud Shell editor.-- ms.contributor: jahelmic Last updated 11/14/2022- -- tags: azure-resource-manager Title: Using the Azure Cloud Shell editor
cloud-shell Using The Shell Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/using-the-shell-window.md
- description: Overview of how to use the Azure Cloud Shell window.-- ms.contributor: jahelmic Last updated 11/14/2022- -- tags: azure-resource-manager Title: Using the Azure Cloud Shell window
cognitive-services Custom Commands Encryption Of Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-commands-encryption-of-data-at-rest.md
# Custom Commands encryption of data at rest + Custom Commands automatically encrypts your data when it is persisted to the cloud. The Custom Commands service encryption protects your data and to help you to meet your organizational security and compliance commitments. > [!NOTE]
cognitive-services Custom Commands References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-commands-references.md
# Custom Commands concepts and definitions + This article serves as a reference for concepts and definitions for Custom Commands applications. ## Commands configuration
cognitive-services Custom Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-commands.md
# What is Custom Commands? + Applications such as [Voice assistants](voice-assistants.md) listen to users and take an action in response, often speaking back. They use [speech-to-text](speech-to-text.md) to transcribe the user's speech, then take action on the natural language understanding of the text. This action frequently includes spoken output from the assistant generated with [text-to-speech](text-to-speech.md). Devices connect to assistants with the Speech SDK's `DialogServiceConnector` object.
-**Custom Commands** makes it easy to build rich voice commanding apps optimized for voice-first interaction experiences. It provides a unified authoring experience, an automatic hosting model, and relatively lower complexity, helping you focus on building the best solution for your voice commanding scenarios.
+Custom Commands makes it easy to build rich voice commanding apps optimized for voice-first interaction experiences. It provides a unified authoring experience, an automatic hosting model, and relatively lower complexity, helping you focus on building the best solution for your voice commanding scenarios.
-Custom Commands is best suited for task completion or command-and-control scenarios, and well matched for Internet of Things (IoT) devices, ambient and headless devices. Examples include solutions for Hospitality, Retail and Automotive industries, where you want voice-controlled experiences for your guests, in-store inventory management or in-car functionality.
+Custom Commands is best suited for task completion or command-and-control scenarios such as "Turn on the overhead light" or "Make it 5 degrees warmer". Custom Commands is well suited for Internet of Things (IoT) devices, ambient and headless devices. Examples include solutions for Hospitality, Retail and Automotive industries, where you want voice-controlled experiences for your guests, in-store inventory management or in-car functionality.
If you're interested in building complex conversational apps, you're encouraged to try the Bot Framework using the [Virtual Assistant Solution](/azure/bot-service/bot-builder-enterprise-template-overview). You can add voice to any bot framework bot using Direct Line Speech.
Once you're done with the quickstart, explore our how-to guides for detailed ste
## Next steps * [View our Voice Assistants repo on GitHub for samples](https://aka.ms/speech/cc-samples)
-* [Go to the Speech Studio to try out Custom Commands](https://speech.microsoft.com/customcommands)
+* [Go to the Speech Studio to try out Custom Commands](https://aka.ms/speechstudio/customcommands)
* [Get the Speech SDK](speech-sdk.md)
cognitive-services How To Custom Commands Debug Build Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-debug-build-time.md
# Debug errors when authoring a Custom Commands application + This article describes how to debug when you see errors while building Custom Commands application. ## Errors when creating an application
cognitive-services How To Custom Commands Debug Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-debug-runtime.md
# Troubleshoot a Custom Commands application at runtime + This article describes how to debug when you see errors while running Custom Commands application. ## Connection failed
If your run Custom Commands application from [client application (with Speech SD
| [1002](#error-1002) | The server returned status code '404' when status code '101' was expected. | ### Error 401-- The region specified in client application does not match with the region of the custom command application
+- The region specified in the client application doesn't match the region of the custom command application
- Speech resource Key is invalid Make sure your speech resource key is correct. ### Error 1002 -- Your custom command application is not published
+- Your custom command application isn't published
Publish your application in the portal. -- Your custom command applicationId is not valid
+- Your custom command applicationId isn't valid
Make sure your custom command application ID is correct. custom command application outside your speech resource
For more information on troubleshooting the connection issues, reference [Window
## Dialog is canceled
-When running your Custom Commands application, the dialog would be canceled when some errors occur.
+When your Custom Commands application is running, the dialog is canceled when certain errors occur.
-- If you are testing the application in the portal, it would directly display the cancellation description and play out an error earcon.
+- If you're testing the application in the portal, it directly displays the cancellation description and plays an error earcon.
-- If you are running the application with [Windows Voice Assistant Client](./how-to-custom-commands-developer-flow-test.md), it would play out an error earcon. You can find the **Event: CancelledDialog** under the **Activity Logs**.
+- If you're running the application with [Windows Voice Assistant Client](./how-to-custom-commands-developer-flow-test.md), it plays an error earcon. You can find the **Event: CancelledDialog** under the **Activity Logs**.
-- If you are following our client application example [client application (with Speech SDK)](./how-to-custom-commands-setup-speech-sdk.md), it would play out an error earcon. You can find the **Event: CancelledDialog** under the **Status**.
+- If you're following our client application example [client application (with Speech SDK)](./how-to-custom-commands-setup-speech-sdk.md), it plays an error earcon. You can find the **Event: CancelledDialog** under the **Status**.
-- If you are building your own client application, you can always design your desired logics to handle the CancelledDialog events.
+- If you're building your own client application, you can design your own logic to handle the CancelledDialog events, as sketched after this list.
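One way to handle this in your own client is to inspect each activity the Speech SDK's `DialogServiceConnector` receives and look for the `CancelledDialog` event name mentioned above. The following C# code is a minimal sketch, not the article's original sample: the app ID, key, region, and the exact shape of the activity payload beyond its `name` field are assumptions you should adapt to your own application.

```csharp
using System;
using System.Text.Json;
using Microsoft.CognitiveServices.Speech.Dialog;

class CancelledDialogHandlerExample
{
    static void Main()
    {
        // Hypothetical placeholders: replace with your own Custom Commands app ID,
        // Speech resource key, and region.
        var config = CustomCommandsConfig.FromSubscription("<your-app-id>", "<your-speech-key>", "<your-region>");
        using var connector = new DialogServiceConnector(config);

        connector.ActivityReceived += (sender, e) =>
        {
            // Each received activity is a JSON string; check it for the CancelledDialog event.
            using var doc = JsonDocument.Parse(e.Activity);
            if (doc.RootElement.TryGetProperty("name", out var name) &&
                name.GetString() == "CancelledDialog")
            {
                // Your own recovery logic goes here, for example restarting the dialog
                // or surfacing the cancellation description to the user.
                Console.WriteLine("Dialog was canceled: " + e.Activity);
            }
        };

        connector.ListenOnceAsync().Wait();
    }
}
```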
The CancelledDialog event consists of cancellation code and description, as listed below:
The CancelledDialog event consists of cancellation code and description, as list
| [MaxTurnThresholdReached](#no-progress-was-made-after-the-max-number-of-turns-allowed) | No progress was made after the max number of turns allowed | | [RecognizerQuotaExceeded](#recognizer-usage-quota-exceeded) | Recognizer usage quota exceeded | | [RecognizerConnectionFailed](#connection-to-the-recognizer-failed) | Connection to the recognizer failed |
-| [RecognizerUnauthorized](#this-application-cannot-be-accessed-with-the-current-subscription) | This application cannot be accessed with the current subscription |
+| [RecognizerUnauthorized](#this-application-cant-be-accessed-with-the-current-subscription) | This application can't be accessed with the current subscription |
| [RecognizerInputExceededAllowedLength](#input-exceeds-the-maximum-supported-length) | Input exceeds the maximum supported length for the recognizer | | [RecognizerNotFound](#recognizer-not-found) | Recognizer not found | | [RecognizerInvalidQuery](#invalid-query-for-the-recognizer) | Invalid query for the recognizer | | [RecognizerError](#recognizer-return-an-error) | Recognizer returns an error | ### No progress was made after the max number of turns allowed
-The dialog is canceled when a required slot is not successfully updated after certain number of turns. The build-in max number is 3.
+The dialog is canceled when a required slot isn't successfully updated after a certain number of turns. The built-in maximum number of turns is 3.
### Recognizer usage quota exceeded Language Understanding (LUIS) has limits on resource usage. Usually "Recognizer usage quota exceeded error" can be caused by:
Language Understanding (LUIS) has limits on resource usage. Usually "Recognizer
Add a prediction resource to your Custom Commands application: 1. Go to **Settings**, LUIS resource
- 1. Choose a prediction resource from **Prediction resource**, or click **Create new resource**
+ 1. Choose a prediction resource from **Prediction resource**, or select **Create new resource**
- Your LUIS prediction resource exceeds the limit
For more details on LUIS resource limits, refer [Language Understanding resource
### Connection to the recognizer failed Usually this means a transient connection failure to the Language Understanding (LUIS) recognizer. Try again and the issue should be resolved.
-### This application cannot be accessed with the current subscription
-Your subscription is not authorized to access the LUIS application.
+### This application can't be accessed with the current subscription
+Your subscription isn't authorized to access the LUIS application.
### Input exceeds the maximum supported length Your input has exceeded 500 characters. We only allow at most 500 characters for input utterance.
Your input has exceeded 500 characters. We only allow at most 500 characters for
The LUIS recognizer returned an error when trying to recognize your input. ### Recognizer not found
-Cannot find the recognizer type specified in your custom commands dialog model. Currently, we only support [Language Understanding (LUIS) Recognizer](https://www.luis.ai/).
+Can't find the recognizer type specified in your custom commands dialog model. Currently, we only support [Language Understanding (LUIS) Recognizer](https://www.luis.ai/).
## Other common errors ### Unexpected response
Unexpected responses may be caused by multiple things.
A few checks to start with: - Yes/No Intents in example sentences
- As we currently don't support Yes/No Intents except when using with confirmation feature. All the Yes/No Intents defined in example sentences would not be detected.
+ We currently don't support Yes/No Intents except when they're used with the confirmation feature, so any Yes/No Intents defined in example sentences won't be detected.
- Similar intents and examples sentences among commands
cognitive-services How To Custom Commands Deploy Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-deploy-cicd.md
# Continuous Deployment with Azure DevOps + In this article, you learn how to set up continuous deployment for your Custom Commands applications. The scripts to support the CI/CD workflow are provided to you. ## Prerequisite
cognitive-services How To Custom Commands Developer Flow Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-developer-flow-test.md
# Test your Custom Commands Application + In this article, you learn different approaches to testing a custom commands application. ## Test in the portal
cognitive-services How To Custom Commands Send Activity To Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-send-activity-to-client.md
# Send Custom Commands activity to client application + In this article, you learn how to send activity from a Custom Commands application to a client application running the Speech SDK. You complete the following tasks:
cognitive-services How To Custom Commands Setup Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-setup-speech-sdk.md
# Integrate with a client application using Speech SDK + In this article, you learn how to make requests to a published Custom Commands application from the Speech SDK running in a UWP application. To establish a connection to the Custom Commands application, you need: - Publish a Custom Commands application and get an application identifier (App ID)
cognitive-services How To Custom Commands Setup Web Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-setup-web-endpoints.md
# Set up web endpoints
-In this article, you will learn how to setup web endpoints in a Custom Commands application that allow you to make HTTP requests from a client application. You will complete the following tasks:
+
+In this article, you'll learn how to set up web endpoints in a Custom Commands application that allow you to make HTTP requests from a client application. You'll complete the following tasks:
- Set up web endpoints in Custom Commands application - Call web endpoints in Custom Commands application
Alternatively, the next section provides details about a default hosted web endp
### Input format of Azure function
-Next, you will deploy an endpoint using [Azure Functions](../../azure-functions/index.yml).
+Next, you'll deploy an endpoint using [Azure Functions](../../azure-functions/index.yml).
The following is the format of a Custom Commands event that is passed to your Azure function. Use this information when you're writing your Azure Function app. ```json
The following table describes the key attributes of this input:
| Attribute | Explanation | | - | |
-| **conversationId** | The unique identifier of the conversation. Note that this ID can be generated by the client app. |
+| **conversationId** | The unique identifier of the conversation. Note this ID can be generated by the client app. |
| **currentCommand** | The command that's currently active in the conversation. | | **name** | The name of the command. The `parameters` attribute is a map with the current values of the parameters. | | **currentGlobalParameters** | A map like `parameters`, but used for global parameters. |
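If you're writing the Azure Function in C#, it can help to deserialize the incoming event into a small model that mirrors these attributes. The classes below are a hypothetical sketch based only on the attributes described in the table above, not the full event schema; parameter values are shown as strings for simplicity, so verify the model against the payload your function actually receives.

```csharp
using System.Collections.Generic;
using System.Text.Json;

// Hypothetical model covering only the attributes described in the table above;
// the real event payload may contain more fields.
public class CustomCommandsEvent
{
    public string ConversationId { get; set; }
    public CurrentCommand CurrentCommand { get; set; }
    public Dictionary<string, string> CurrentGlobalParameters { get; set; }
}

public class CurrentCommand
{
    public string Name { get; set; }
    public Dictionary<string, string> Parameters { get; set; }
}

public static class CustomCommandsEventParser
{
    // The payload uses camelCase names (conversationId, currentCommand, ...),
    // so deserialize case-insensitively.
    public static CustomCommandsEvent Parse(string requestBody) =>
        JsonSerializer.Deserialize<CustomCommandsEvent>(
            requestBody,
            new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
}
```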
This output should be written to an external storage, so that you can maintain t
We provide a sample you can configure and deploy as an Azure Functions app. To create a storage account for our sample, follow these steps. 1. Create table storage to save device state. In the Azure portal, create a new resource of type **Storage account** named **devicestate**.
-1. Copy the **Connection string** value from **devicestate -> Access keys**. You will need to add this string secret to the downloaded sample Function App code.
+1. Copy the **Connection string** value from **devicestate -> Access keys**. You'll need to add this string secret to the downloaded sample Function App code.
1. Download sample [Function App code](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/tree/main/custom-commands/quick-start). 1. Open the downloaded solution in Visual Studio 2019. In **Connections.json**, replace **STORAGE_ACCOUNT_SECRET_CONNECTION_STRING** with the secret from Step 2. 1. Download the **DeviceStateAzureFunction** code.
To deploy the sample app to Azure Functions, follow these steps.
## Set up web endpoints in Custom Commands Let's hook up the Azure function to the existing Custom Commands application.
-In this section, you will use an existing default **DeviceState** endpoint. If you created your own web endpoint using Azure Function or otherwise, use that instead of the default `https://webendpointexample.azurewebsites.net/api/DeviceState`.
+In this section, you'll use an existing default **DeviceState** endpoint. If you created your own web endpoint using Azure Function or otherwise, use that instead of the default `https://webendpointexample.azurewebsites.net/api/DeviceState`.
1. Open the Custom Commands application you previously created.
-1. Go to **Web endpoints**, click **New web endpoint**.
+1. Go to **Web endpoints**, select **New web endpoint**.
> [!div class="mx-imgBorder"] > ![New web endpoint](media/custom-commands/setup-web-endpoint-new-endpoint.png)
Add the following XML to `MainPage.xaml` above the **EnableMicrophoneButton** bl
### Sync device state
-In `MainPage.xaml.cs`, add the reference `using Windows.Web.Http;`. Add the following code to the `MainPage` class. This method will send a GET request to the example endpoint, and extract the current device state for your app. Make sure to change `<your_app_name>` to what you used in the **header** in Custom Command web endpoint.
+In `MainPage.xaml.cs`, add the reference `using Windows.Web.Http;`. Add the following code to the `MainPage` class. This method sends a GET request to the example endpoint and extracts the current device state for your app. Make sure to change `<your_app_name>` to what you used in the **header** in the Custom Commands web endpoint.
```C# private async void SyncDeviceState_ButtonClicked(object sender, RoutedEventArgs e)
private async void SyncDeviceState_ButtonClicked(object sender, RoutedEventArgs
## Try it out 1. Start the application.
-1. Click Sync Device State.\
+1. Select Sync Device State.\
If you tested the app with `turn on tv` in the previous section, you'll see the TV shown as **on**. > [!div class="mx-imgBorder"] > ![Sync device state](media/custom-commands/setup-web-endpoint-sync-device-state.png)
cognitive-services How To Custom Commands Update Command From Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-update-command-from-client.md
# Update a command from a client app + In this article, you'll learn how to update an ongoing command from a client application. ## Prerequisites
cognitive-services How To Custom Commands Update Command From Web Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-update-command-from-web-endpoint.md
# Update a command from a web endpoint + If your client application requires an update to the state of an ongoing command without voice input, you can use a call to a web endpoint to update the command. In this article, you'll learn how to update an ongoing command from a web endpoint.
cognitive-services How To Develop Custom Commands Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-develop-custom-commands-application.md
# Develop Custom Commands applications + In this how-to article, you learn how to develop and configure Custom Commands applications. The Custom Commands feature helps you build rich voice-command apps that are optimized for voice-first interaction experiences. The feature is best suited to task completion or command-and-control scenarios. It's particularly well suited for Internet of Things (IoT) devices and for ambient and headless devices. In this article, you create an application that can turn a TV on and off, set the temperature, and set an alarm. After you create these basic commands, you'll learn about the following options for customizing commands:
cognitive-services How To Use Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-logging.md
Logging to file is an optional feature for the Speech SDK. During development lo
## Sample
-The log file name is specified on a configuration object. Taking the `SpeechConfig` as an example and assuming that you have created an instance called `config`:
+The log file name is specified on a configuration object. Taking the `SpeechConfig` as an example and assuming that you've created an instance called `speechConfig`:
```csharp
-config.SetProperty(PropertyId.Speech_LogFilename, "LogfilePathAndName");
+speechConfig.SetProperty(PropertyId.Speech_LogFilename, "LogfilePathAndName");
``` ```java
-config.setProperty(PropertyId.Speech_LogFilename, "LogfilePathAndName");
+speechConfig.setProperty(PropertyId.Speech_LogFilename, "LogfilePathAndName");
``` ```C++
-config->SetProperty(PropertyId::Speech_LogFilename, "LogfilePathAndName");
+speechConfig->SetProperty(PropertyId::Speech_LogFilename, "LogfilePathAndName");
``` ```Python
-config.set_property(speechsdk.PropertyId.Speech_LogFilename, "LogfilePathAndName")
+speech_config.set_property(speechsdk.PropertyId.Speech_LogFilename, "LogfilePathAndName")
``` ```objc
-[config setPropertyTo:@"LogfilePathAndName" byId:SPXSpeechLogFilename];
+[speechConfig setPropertyTo:@"LogfilePathAndName" byId:SPXSpeechLogFilename];
``` ```go import ("github.com/Microsoft/cognitive-services-speech-sdk-go/common")
-config.SetProperty(common.SpeechLogFilename, "LogfilePathAndName")
+speechConfig.SetProperty(common.SpeechLogFilename, "LogfilePathAndName")
```
-You can create a recognizer from the config object. This will enable logging for all recognizers.
+You can create a recognizer from the configuration object. This will enable logging for all recognizers.
> [!NOTE]
-> If you create a `SpeechSynthesizer` from the config object, it will not enable logging. If logging is enabled though, you will also receive diagnostics from the `SpeechSynthesizer`.
+> If you create a `SpeechSynthesizer` from the configuration object, it will not enable logging. If logging is enabled though, you will also receive diagnostics from the `SpeechSynthesizer`.
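As a concrete C# example (a minimal sketch; the key, region, and default-microphone input are placeholders), creating any recognizer from the configured `speechConfig` is enough to turn on file logging for the whole process:

```csharp
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

// Placeholder key and region; replace with your own Speech resource values.
var speechConfig = SpeechConfig.FromSubscription("<your-speech-key>", "<your-region>");
speechConfig.SetProperty(PropertyId.Speech_LogFilename, "LogfilePathAndName");

// Creating any recognizer from this configuration enables SDK logging process-wide.
using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);
```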
+
+JavaScript is an exception where the logging is enabled via SDK diagnostics as shown in the following code snippet:
+
+```javascript
+sdk.Diagnostics.SetLoggingLevel(sdk.LogLevel.Debug);
+sdk.Diagnostics.SetLogOutputPath("LogfilePathAndName");
+```
## Create a log file on different platforms
UWP applications need to place log files in one of the application data loca
```csharp StorageFolder storageFolder = ApplicationData.Current.LocalFolder; StorageFile logFile = await storageFolder.CreateFileAsync("logfile.txt", CreationCollisionOption.ReplaceExisting);
-config.SetProperty(PropertyId.Speech_LogFilename, logFile.Path);
+speechConfig.SetProperty(PropertyId.Speech_LogFilename, logFile.Path);
``` Within a Unity UWP application, a log file can be created using the application persistent data path folder as follows:
Within a Unity UWP application, a log file can be created using the application
```csharp #if ENABLE_WINMD_SUPPORT string logFile = Application.persistentDataPath + "/logFile.txt";
- config.SetProperty(PropertyId.Speech_LogFilename, logFile);
+ speechConfig.SetProperty(PropertyId.Speech_LogFilename, logFile);
#endif ``` For more about file access permissions in UWP applications, see [File access permissions](/windows/uwp/files/file-access-permissions).
You can save a log file to either internal storage, external storage, or the cac
```java File dir = context.getExternalFilesDir(null); File logFile = new File(dir, "logfile.txt");
-config.setProperty(PropertyId.Speech_LogFilename, logFile.getAbsolutePath());
+speechConfig.setProperty(PropertyId.Speech_LogFilename, logFile.getAbsolutePath());
``` The code above will save a log file to the external storage in the root of an application-specific directory. A user can access the file with the file manager (usually in `Android/data/ApplicationName/logfile.txt`). The file will be deleted when the application is uninstalled.
Within a Unity Android application, the log file can be created using the applic
```csharp string logFile = Application.persistentDataPath + "/logFile.txt";
-config.SetProperty(PropertyId.Speech_LogFilename, logFile);
+speechConfig.SetProperty(PropertyId.Speech_LogFilename, logFile);
In addition, you also need to set the write permission in your Unity Player settings for Android to "External (SDCard)". The log will be written to a directory you can find using a tool such as the Android Studio Device File Explorer. The exact directory path may vary between Android devices,
To access a created file, add the below properties to the `Info.plist` property
<true/> ```
-If you are using Swift on iOS, please use the following code snippet to enable logs:
+If you're using Swift on iOS, use the following code snippet to enable logs:
```swift let documentsDirectoryPathString = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first! let documentsDirectoryPath = NSURL(string: documentsDirectoryPathString)!
More about iOS File System is available [here](https://developer.apple.com/libra
Although a log file output path is specified as a configuration property into a `SpeechRecognizer` or other SDK object, SDK logging is a singleton, *process-wide* facility with no concept of individual instances. You can think of this as the `SpeechRecognizer` constructor (or similar) implicitly calling a static and internal "Configure Global Logging" routine with the property data available in the corresponding `SpeechConfig`.
-This means that you cannot, as an example, configure six parallel recognizers to output simultaneously to six separate files. Instead, the latest recognizer created will configure the global logging instance to output to the file specified in its configuration properties and all SDK logging will be emitted to that file.
+This means that you can't, as an example, configure six parallel recognizers to output simultaneously to six separate files. Instead, the latest recognizer created will configure the global logging instance to output to the file specified in its configuration properties and all SDK logging will be emitted to that file.
-This also means that the lifetime of the object that configured logging is not tied to the duration of logging. Logging will not stop in response to the release of an SDK object and will continue as long as no new logging configuration is provided. Once started, process-wide logging may be stopped by setting the log file path to an empty string when creating a new object.
+This also means that the lifetime of the object that configured logging isn't tied to the duration of logging. Logging will not stop in response to the release of an SDK object and will continue as long as no new logging configuration is provided. Once started, process-wide logging may be stopped by setting the log file path to an empty string when creating a new object.
To reduce potential confusion when configuring logging for multiple instances, it may be useful to abstract control of logging from objects doing real work. An example pair of helper routines:
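The following C# pair is a minimal sketch of that idea, not a definitive implementation: based on the behavior described above, it applies the logging configuration by constructing a short-lived recognizer whose config carries either a real log path (start) or an empty string (stop). The key and region are placeholders.

```csharp
using Microsoft.CognitiveServices.Speech;

public static class SpeechSdkLogging
{
    // Starts process-wide SDK logging by creating a throwaway recognizer whose
    // configuration carries the log file path.
    public static void StartLogging(string subscriptionKey, string region, string logFilePath)
    {
        var config = SpeechConfig.FromSubscription(subscriptionKey, region);
        config.SetProperty(PropertyId.Speech_LogFilename, logFilePath);
        using var recognizer = new SpeechRecognizer(config);
    }

    // Stops process-wide SDK logging by applying an empty log file path.
    public static void StopLogging(string subscriptionKey, string region)
    {
        var config = SpeechConfig.FromSubscription(subscriptionKey, region);
        config.SetProperty(PropertyId.Speech_LogFilename, string.Empty);
        using var recognizer = new SpeechRecognizer(config);
    }
}
```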
cognitive-services Logging Audio Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/logging-audio-transcription.md
+
+ Title: How to log audio and transcriptions for speech recognition
+
+description: Learn how to use audio and transcription logging for speech-to-text and speech translation.
+Last updated : 03/28/2023
+zone_pivot_groups: programming-languages-speech-services-nomore-variant
++
+# How to log audio and transcriptions for speech recognition
+
+You can enable logging for both audio input and recognized speech when using [speech-to-text](get-started-speech-to-text.md) or [speech translation](get-started-speech-translation.md). For speech translation, only the audio and transcription of the original audio are logged. The translations aren't logged. This article describes how to enable, access, and delete the audio and transcription logs.
+
+Audio and transcription logs can be used as input for [Custom Speech](custom-speech-overview.md) model training. You might have other use cases.
+
+> [!WARNING]
+> Don't depend on audio and transcription logs when the exact record of input audio is required. During periods of peak load, the service prioritizes hardware resources for transcription tasks. This may result in minor parts of the audio not being logged. Such occasions are rare, but still possible.
+
+Logging is done asynchronously for both base and custom model endpoints. Audio and transcription logs are stored by the Speech service and not written locally. The logs are retained for 30 days. After this period, the logs are automatically deleted. However, you can [delete](#delete-audio-and-transcription-logs) specific logs or a range of available logs at any time.
+
+## Enable audio and transcription logging
+
+Logging is disabled by default. Logging can be enabled [per recognition session](#enable-logging-for-a-single-recognition-session) or [per custom model endpoint](#enable-audio-and-transcription-logging-for-a-custom-model-endpoint).
+
+### Enable logging for a single recognition session
+
+You can enable logging for a single recognition session, whether using the default base model or [custom model](how-to-custom-speech-deploy-model.md) endpoint.
+
+> [!WARNING]
+> For custom model endpoints, the logging setting of your deployed endpoint is prioritized over your session-level setting (SDK or REST API). If logging is enabled for the custom model endpoint, the session-level setting (whether it's set to true or false) is ignored. If logging isn't enabled for the custom model endpoint, the session-level setting determines whether logging is active.
+
+#### Enable logging for speech-to-text with the Speech SDK
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `EnableAudioLogging()` of the [SpeechConfig](/dotnet/api/microsoft.cognitiveservices.speech.speechconfig) class instance.
+
+```csharp
+speechConfig.EnableAudioLogging();
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/dotnet/api/microsoft.cognitiveservices.speech.propertyid):
+
+```csharp
+string isAudioLoggingEnabled = speechConfig.GetProperty(PropertyId.SpeechServiceConnection_EnableAudioLogging);
+```
+
+Each [SpeechRecognizer](/dotnet/api/microsoft.cognitiveservices.speech.speechrecognizer) that uses this `speechConfig` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `EnableAudioLogging` of the [SpeechConfig](/cpp/cognitive-services/speech/speechconfig) class instance.
+
+```cpp
+speechConfig->EnableAudioLogging();
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` property:
+
+```cpp
+string isAudioLoggingEnabled = speechConfig->GetProperty(PropertyId::SpeechServiceConnection_EnableAudioLogging);
+```
+
+Each [SpeechRecognizer](/cpp/cognitive-services/speech/speechrecognizer) that uses this `speechConfig` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `enableAudioLogging()` of the [SpeechConfig](/java/api/com.microsoft.cognitiveservices.speech.speechconfig) class instance.
+
+```java
+speechConfig.enableAudioLogging();
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/java/api/com.microsoft.cognitiveservices.speech.propertyid):
+
+```java
+String isAudioLoggingEnabled = speechConfig.getProperty(PropertyId.SpeechServiceConnection_EnableAudioLogging);
+```
+
+Each [SpeechRecognizer](/java/api/com.microsoft.cognitiveservices.speech.speechrecognizer) that uses this `speechConfig` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `enableAudioLogging()` of the [SpeechConfig](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig) class instance.
+
+```javascript
+speechConfig.enableAudioLogging();
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/javascript/api/microsoft-cognitiveservices-speech-sdk/propertyid):
+
+```javascript
+var SpeechSDK;
+SpeechSDK = speechSdk;
+// <...>
+const isAudioLoggingEnabled = speechConfig.getProperty(SpeechSDK.PropertyId.SpeechServiceConnection_EnableAudioLogging);
+```
+
+Each [SpeechRecognizer](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechrecognizer) that uses this `speechConfig` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `enable_audio_logging` of the [SpeechConfig](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig) class instance.
+
+```python
+speech_config.enable_audio_logging()
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.propertyid):
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+# <...>
+is_audio_logging_enabled = speech_config.get_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_EnableAudioLogging)
+```
+
+Each [SpeechRecognizer](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechrecognizer) that uses this `speech_config` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `enableAudioLogging` of the [SPXSpeechConfiguration](/objectivec/cognitive-services/speech/spxspeechconfiguration) class instance.
+
+```objectivec
+[speechConfig enableAudioLogging];
+```
+
+To check whether logging is enabled, get the value of the `SPXSpeechServiceConnectionEnableAudioLogging` [property](/objectivec/cognitive-services/speech/spxpropertyid):
+
+```objectivec
+NSString *isAudioLoggingEnabled = [speechConfig getPropertyById:SPXSpeechServiceConnectionEnableAudioLogging];
+```
+
+Each [SpeechRecognizer](/objectivec/cognitive-services/speech/spxspeechrecognizer) that uses this `speechConfig` has audio and transcription logging enabled.
++
+#### Enable logging for speech translation with the Speech SDK
+
+For speech translation, only the audio and transcription of the original audio are logged. The translations aren't logged.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `EnableAudioLogging()` of the [SpeechTranslationConfig](/dotnet/api/microsoft.cognitiveservices.speech.speechtranslationconfig) class instance.
+
+```csharp
+speechTranslationConfig.EnableAudioLogging();
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/dotnet/api/microsoft.cognitiveservices.speech.propertyid):
+
+```csharp
+string isAudioLoggingEnabled = speechTranslationConfig.GetProperty(PropertyId.SpeechServiceConnection_EnableAudioLogging);
+```
+
+Each [TranslationRecognizer](/dotnet/api/microsoft.cognitiveservices.speech.translation.translationrecognizer) that uses this `speechTranslationConfig` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `EnableAudioLogging` of the [SpeechTranslationConfig](/cpp/cognitive-services/speech/translation-speechtranslationconfig) class instance.
+
+```cpp
+speechTranslationConfig->EnableAudioLogging();
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` property:
+
+```cpp
+string isAudioLoggingEnabled = speechTranslationConfig->GetProperty(PropertyId::SpeechServiceConnection_EnableAudioLogging);
+```
+
+Each [TranslationRecognizer](/cpp/cognitive-services/speech/translation-translationrecognizer) that uses this `speechTranslationConfig` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `enableAudioLogging()` of the [SpeechTranslationConfig](/java/api/com.microsoft.cognitiveservices.speech.translation.speechtranslationconfig) class instance.
+
+```java
+speechTranslationConfig.enableAudioLogging();
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/java/api/com.microsoft.cognitiveservices.speech.propertyid):
+
+```java
+String isAudioLoggingEnabled = speechTranslationConfig.getProperty(PropertyId.SpeechServiceConnection_EnableAudioLogging);
+```
+
+Each [TranslationRecognizer](/java/api/com.microsoft.cognitiveservices.speech.translation.translationrecognizer) that uses this `speechTranslationConfig` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `enableAudioLogging()` of the [SpeechTranslationConfig](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechtranslationconfig) class instance.
+
+```javascript
+speechTranslationConfig.enableAudioLogging();
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/javascript/api/microsoft-cognitiveservices-speech-sdk/propertyid):
+
+```javascript
+var SpeechSDK;
+SpeechSDK = speechSdk;
+// <...>
+const isAudioLoggingEnabled = speechTranslationConfig.getProperty(SpeechSDK.PropertyId.SpeechServiceConnection_EnableAudioLogging);
+```
+
+Each [TranslationRecognizer](/javascript/api/microsoft-cognitiveservices-speech-sdk/translationrecognizer) that uses this `speechTranslationConfig` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `enable_audio_logging` of the [SpeechTranslationConfig](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.translation.speechtranslationconfig) class instance.
+
+```python
+speech_translation_config.enable_audio_logging()
+```
+
+To check whether logging is enabled, get the value of the `SpeechServiceConnection_EnableAudioLogging` [property](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.propertyid):
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+# <...>
+is_audio_logging_enabled = speech_translation_config.get_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_EnableAudioLogging)
+```
+
+Each [TranslationRecognizer](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.translation.translationrecognizer) that uses this `speech_translation_config` has audio and transcription logging enabled.
++
+To enable audio and transcription logging with the Speech SDK, you execute the method `enableAudioLogging` of the [SPXSpeechTranslationConfiguration](/objectivec/cognitive-services/speech/spxspeechtranslationconfiguration) class instance.
+
+```objectivec
+[speechTranslationConfig enableAudioLogging];
+```
+
+To check whether logging is enabled, get the value of the `SPXSpeechServiceConnectionEnableAudioLogging` [property](/objectivec/cognitive-services/speech/spxpropertyid):
+
+```objectivec
+NSString *isAudioLoggingEnabled = [speechTranslationConfig getPropertyById:SPXSpeechServiceConnectionEnableAudioLogging];
+```
+
+Each [TranslationRecognizer](/objectivec/cognitive-services/speech/spxtranslationrecognizer) that uses this `speechTranslationConfig` has audio and transcription logging enabled.
++
+#### Enable logging for speech-to-text REST API for short audio
+
+If you use [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) and want to enable audio and transcription logging, you need to use the query parameter and value `storeAudio=true` as a part of your REST request. A sample request looks like this:
+
+```http
+https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US&storeAudio=true
+```
+
+### Enable audio and transcription logging for a custom model endpoint
+
+This method is applicable for [Custom Speech](custom-speech-overview.md) endpoints only.
+
+Logging can be enabled or disabled in the persistent custom model endpoint settings. When logging is enabled (turned on) for a custom model endpoint, then you don't need to enable logging at the [recognition session level with the SDK or REST API](#enable-logging-for-a-single-recognition-session). Even when logging isn't enabled for a custom model endpoint, you can enable logging temporarily at the recognition session level with the SDK or REST API.
+
+> [!WARNING]
+> For custom model endpoints, the logging setting of your deployed endpoint is prioritized over your session-level setting (SDK or REST API). If logging is enabled for the custom model endpoint, the session-level setting (whether it's set to true or false) is ignored. If logging isn't enabled for the custom model endpoint, the session-level setting determines whether logging is active.
+
+You can enable audio and transcription logging for a custom model endpoint:
+- When you create the endpoint using the Speech Studio, REST API, or Speech CLI. For details about how to enable logging for a Custom Speech endpoint, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md#add-a-deployment-endpoint).
+- When you update the endpoint ([Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update)) using the [Speech-to-text REST API](rest-speech-to-text.md). For an example of how to update the logging setting for an endpoint, see [Turn off logging for a custom model endpoint](#turn-off-logging-for-a-custom-model-endpoint). But instead of setting the `contentLoggingEnabled` property to `false`, set it to `true` to enable logging for the endpoint.
+
+## Turn off logging for a custom model endpoint
+
+To disable audio and transcription logging for a custom model endpoint, you must update the persistent endpoint logging setting using the [Speech-to-text REST API](rest-speech-to-text.md). There isn't a way to disable logging for an existing custom model endpoint using the Speech Studio.
+
+To turn off logging for a custom endpoint, use the [Endpoints_Update](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_Update) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
+
+- Set the `contentLoggingEnabled` property within `properties`. Set this property to `true` to enable logging of the endpoint's traffic. Set this property to `false` to disable logging of the endpoint's traffic.
+
+Make an HTTP PATCH request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, replace `YourEndpointId` with your endpoint ID, and set the request body properties as previously described.
+
+```azurecli-interactive
+curl -v -X PATCH -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+ "properties": {
+ "contentLoggingEnabled": false
+ }
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/YourEndpointId"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2",
+ "model": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/71b46720-995d-4038-a331-0317e9e7a02f"
+ },
+ "links": {
+ "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2/files/logs",
+ "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2",
+ "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2",
+ "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2",
+ "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2",
+ "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2",
+ "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=4ef91f9b-7ac9-4c3b-a238-581ef0f8b7e2"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/122fd2f7-1d3a-4404-885d-2b24a2a187e8"
+ },
+ "properties": {
+ "loggingEnabled": false
+ },
+ "lastActionDateTime": "2023-03-28T23:03:15Z",
+ "status": "Succeeded",
+ "createdDateTime": "2023-03-28T23:02:40Z",
+ "locale": "en-US",
+ "displayName": "My Endpoint",
+ "description": "My Endpoint Description"
+}
+```
+
+The response body should reflect the new setting. The name of the logging property in the response (`loggingEnabled`) is different from the name of the logging property that you set in the request (`contentLoggingEnabled`).
+
+## Get audio and transcription logs
+
+You can access audio and transcription logs using [Speech-to-text REST API](#get-audio-and-transcription-logs-with-speech-to-text-rest-api). For [custom model](how-to-custom-speech-deploy-model.md) endpoints, you can also use [Speech Studio](#get-audio-and-transcription-logs-with-speech-studio). See details in the following sections.
+
+> [!NOTE]
+> Logging data is kept for 30 days. After this period, the logs are automatically deleted. However, you can [delete](#delete-audio-and-transcription-logs) specific logs or a range of available logs at any time.
+
+### Get audio and transcription logs with Speech Studio
+
+This method is applicable for [custom model](how-to-custom-speech-deploy-model.md) endpoints only.
+
+To download the endpoint logs:
+
+1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+1. Select **Custom Speech** > Your project name > **Deploy models**.
+1. Select the link by endpoint name.
+1. Under **Content logging**, select **Download log**.
+
+With this approach, you can download all available log sets at once. There's no way to download selected log sets in Speech Studio.
+
+### Get audio and transcription logs with Speech-to-text REST API
+
+You can download all or a subset of available log sets.
+
+This method is applicable for base and [custom model](how-to-custom-speech-deploy-model.md) endpoints. To list and download audio and transcription logs:
+- Base models: Use the [Endpoints_ListBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListBaseModelLogs) operation of the [Speech-to-text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored when using the default base model of a given language.
+- Custom model endpoints: Use the [Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListLogs) operation of the [Speech-to-text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored for a given endpoint.
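As a minimal C# sketch for the custom model endpoint case, a GET request against the endpoint's `files/logs` path (the same path shown in the `logs` link and `self` URLs elsewhere in this article) returns the list of stored log files. The key, region, and endpoint ID below are placeholders.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ListEndpointLogs
{
    static async Task Main()
    {
        // Placeholders: replace with your Speech resource key, region, and endpoint ID.
        var key = "<YourSubscriptionKey>";
        var region = "<YourServiceRegion>";
        var endpointId = "<YourEndpointId>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

        // Lists the audio and transcription log files stored for the custom endpoint.
        var url = $"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/{endpointId}/files/logs";
        var response = await client.GetAsync(url);
        response.EnsureSuccessStatusCode();
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```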
+
+### Get log IDs with Speech-to-text REST API
+
+In some scenarios, you may need to get IDs of the available logs. For example, you may want to delete a specific log as described [later in this article](#delete-specific-log).
+
+To get IDs of the available logs:
+- Base models: Use the [Endpoints_ListBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListBaseModelLogs) operation of the [Speech-to-text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored when using the default base model of a given language.
+- Custom model endpoints: Use the [Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListLogs) operation of the [Speech-to-text REST API](rest-speech-to-text.md). This operation gets the list of audio and transcription logs that have been stored for a given endpoint.
+
+Here's a sample output of [Endpoints_ListLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListLogs). For simplicity, only one log set is shown:
+
+```json
+{
+ "values": [
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/files/logs/2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_v2_json",
+ "name": "163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9.v2.json",
+ "kind": "Transcription",
+ "properties": {
+ "size": 79920
+ },
+ "createdDateTime": "2023-03-13T16:37:15Z",
+ "links": {
+ "contentUrl": "<Link to download log file>"
+ }
+ },
+ {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/files/logs/2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_wav",
+ "name": "163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9.wav",
+ "kind": "Audio",
+ "properties": {
+ "size": 932966
+ },
+ "createdDateTime": "2023-03-13T16:37:15Z",
+ "links": {
+ "contentUrl": "<Link to download log file>"
+ }
+ }
+ ]
+}
+```
+
+The location of each audio and transcription log file is returned in the response body. See the corresponding `kind` property to determine whether the file includes the audio (`"kind": "Audio"`) or the transcription (`"kind": "Transcription"`).
+
+The log ID for each log file is the last part of the URL in the `"self"` element value. The log ID in the following example is `2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_v2_json`.
+
+```json
+"self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/files/logs/2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_v2_json"
+```
+
+## Delete audio and transcription logs
+
+Logging data is kept for 30 days. After this period, the logs are automatically deleted. However, you can delete specific logs or a range of available logs at any time.
+
+For any base or [custom model](how-to-custom-speech-deploy-model.md) endpoint you can delete all available logs, logs for a given time frame, or a particular log based on its Log ID. The deletion process is done asynchronously and can take minutes, hours, one day, or longer depending on the number of log files.
+
+To delete audio and transcription logs you must use the [Speech-to-text REST API](rest-speech-to-text.md). There isn't a way to delete logs using the Speech Studio.
+
+### Delete all logs or logs for a given time frame
+
+To delete all logs or logs for a given time frame:
+
+- Base models: Use the [Endpoints_DeleteBaseModelLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteBaseModelLogs) operation of the [Speech-to-text REST API](rest-speech-to-text.md).
+- Custom model endpoints: Use the [Endpoints_DeleteLogs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteLogs) operation of the [Speech-to-text REST API](rest-speech-to-text.md).
+
+Optionally, set the `endDate` of the audio logs deletion (specific day, UTC). Expected format: "yyyy-mm-dd". For instance, "2023-03-15" results in deleting all logs on March 15, 2023 and before.
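For a custom model endpoint, one way to issue that request from C# is a DELETE against the same `files/logs` path used for listing. This is a sketch under the assumption that `endDate` is accepted as a query parameter in the format described above; the key, region, and endpoint ID are placeholders, and you should confirm the request shape against the Endpoints_DeleteLogs reference.

```csharp
using System.Net.Http;
using System.Threading.Tasks;

class DeleteEndpointLogs
{
    static async Task Main()
    {
        // Placeholders: replace with your Speech resource key, region, and endpoint ID.
        var key = "<YourSubscriptionKey>";
        var region = "<YourServiceRegion>";
        var endpointId = "<YourEndpointId>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

        // Assumption: endDate is passed as a query parameter in "yyyy-mm-dd" format,
        // deleting all logs from that day and earlier. Omit it to delete all logs.
        var url = $"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/{endpointId}/files/logs?endDate=2023-03-15";
        var response = await client.DeleteAsync(url);
        response.EnsureSuccessStatusCode();
    }
}
```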
+
+### Delete specific log
+
+To delete a specific log by ID:
+
+- Base models: Use the [Endpoints_DeleteBaseModelLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteBaseModelLog) operation of the [Speech-to-text REST API](rest-speech-to-text.md).
+- Custom model endpoints: Use the [Endpoints_DeleteLog](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_DeleteLog) operation of the [Speech-to-text REST API](rest-speech-to-text.md).
+
+For details about how to get Log IDs, see a previous section [Get log IDs with Speech-to-text REST API](#get-log-ids-with-speech-to-text-rest-api).
+
+Since audio and transcription logs have separate IDs (such as IDs `2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_v2_json` and `2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_wav` from a [previous example in this article](#get-log-ids-with-speech-to-text-rest-api)), when you want to delete both audio and transcription logs you execute separate [delete by ID](#delete-specific-log) requests.
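Extending the earlier deletion sketch (same `client`, `region`, and `endpointId`), deleting the pair means two DELETE calls, one per log ID. Appending the log ID to the `files/logs` path mirrors the `self` URLs shown earlier and is an assumption about the operation's route; the log IDs below are the sample values from this article.

```csharp
// Assumption: a specific log is deleted by appending its ID to the files/logs path,
// matching the "self" URLs returned by the list operation. One call per log file.
var logIds = new[]
{
    "2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_v2_json", // transcription
    "2023-03-13_163715__0420c53d-e6ac-4857-bce0-f39c3f9f5ff9_wav"      // audio
};

foreach (var logId in logIds)
{
    var url = $"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/{endpointId}/files/logs/{logId}";
    var response = await client.DeleteAsync(url);
    response.EnsureSuccessStatusCode();
}
```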
+
+## Next steps
+
+* [Speech-to-text quickstart](get-started-speech-to-text.md)
+* [Speech translation quickstart](./get-started-speech-translation.md)
+* [Create and train custom speech models](custom-speech-overview.md)
cognitive-services Quickstart Custom Commands Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstart-custom-commands-application.md
# Quickstart: Create a voice assistant with Custom Commands + In this quickstart, you create and test a basic Custom Commands application using Speech Studio. You will also be able to access this application from a Windows client app. ## Region Availability
cognitive-services Voice Assistants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/voice-assistants.md
# What is a voice assistant?
-By using voice assistants with the Speech service, developers can create natural, human-like, conversational interfaces for their applications and experiences.
-
-The voice assistant service provides fast, reliable interaction between a device and an assistant implementation that uses either [Direct Line Speech](direct-line-speech.md) (via Azure Bot Service) for adding voice capabilities to your bots or Custom Commands for voice-command scenarios.
+By using voice assistants with the Speech service, developers can create natural, human-like, conversational interfaces for their applications and experiences. The voice assistant service provides fast, reliable interaction between a device and an assistant implementation.
## Choose an assistant solution
-The first step in creating a voice assistant is to decide what you want it to do. Speech service provides multiple, complementary solutions for crafting assistant interactions. For flexibility and versatility, you can add voice in and voice out capabilities to a bot by using Azure Bot Service with the [Direct Line Speech](direct-line-speech.md) channel, or you can simply author a [Custom Commands](custom-commands.md) app for more straightforward voice-command scenarios.
-
-| If you want... | Consider using... | Examples |
-|-||-|
-|Open-ended conversation with robust skills integration and full deployment control | Azure Bot Service bot with [Direct Line Speech](direct-line-speech.md) channel | <ul><li>"I need to go to Seattle"</li><li>"What kind of pizza can I order?"</li></ul>
-|Voice-command or simple task-oriented conversations with simplified authoring and hosting | [Custom Commands](custom-commands.md) | <ul><li>"Turn on the overhead light"</li><li>"Make it 5 degrees warmer"</li><li>More examples at [Speech Studio](https://aka.ms/speechstudio/customcommands)</li></ul>
+The first step in creating a voice assistant is to decide what you want it to do. Speech service provides multiple, complementary solutions for crafting assistant interactions. You might want your application to support an open-ended conversation with phrases such as "I need to go to Seattle" or "What kind of pizza can I order?" For flexibility and versatility, you can add voice in and voice out capabilities to a bot by using Azure Bot Service with the [Direct Line Speech](direct-line-speech.md) channel.
If you aren't yet sure what you want your assistant to do, we recommend [Direct Line Speech](direct-line-speech.md) as the best option. It offers integration with a rich set of tools and authoring aids, such as the [Virtual Assistant solution and enterprise template](/azure/bot-service/bot-builder-enterprise-template-overview) and the [QnA Maker service](../qnamaker/overview/overview.md), to build on common patterns and use your existing knowledge sources.
-If you want to keep it simpler for now, [Custom Commands](custom-commands.md) makes it easy to build rich, voice-command apps that are optimized for voice-first interaction. Custom Commands provides a unified authoring experience, an automatic hosting model, and relatively lower complexity, all of which can help you focus on building the best solution for your voice-command scenario.
-
- ![Screenshot of a graph comparing the relative complexity and flexibility of the two voice assistant solutions.](media/voice-assistants/assistant-solution-comparison.png)
- ## Reference architecture for building a voice assistant by using the Speech SDK ![Conceptual diagram of the voice assistant orchestration service flow.](media/voice-assistants/overview.png) ## Core features
-Whether you choose [Direct Line Speech](direct-line-speech.md) or [Custom Commands](custom-commands.md) to create your assistant interactions, you can use a rich set of customization features to customize your assistant to your brand, product, and personality.
+Whether you choose [Direct Line Speech](direct-line-speech.md) or another solution to create your assistant interactions, you can use a rich set of customization features to customize your assistant to your brand, product, and personality.
| Category | Features | |-|-|
Whether you choose [Direct Line Speech](direct-line-speech.md) or [Custom Comman
## Get started with voice assistants
-We offer the following quickstart articles, organized by programming language, that are designed to have you running code in less than 10 minutes:
-
-* [Quickstart: Create a custom voice assistant by using Direct Line Speech](quickstarts/voice-assistants.md)
-* [Quickstart: Build a voice-command app by using Custom Commands](quickstart-custom-commands-application.md)
+We offer the following quickstart article that's designed to have you running code in less than 10 minutes: [Quickstart: Create a custom voice assistant by using Direct Line Speech](quickstarts/voice-assistants.md)
## Sample code and tutorials
Sample code for creating a voice assistant is available on GitHub. The samples c
* [Voice assistant samples on GitHub](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant) * [Tutorial: Voice-enable an assistant that's built by using Azure Bot Service with the C# Speech SDK](tutorial-voice-enable-your-bot-speech-sdk.md)
-* [Tutorial: Create a Custom Commands application with simple voice commands](./how-to-develop-custom-commands-application.md)
## Customization
Voice assistants that you build by using Speech service can use a full range of
## Next steps
-* [Learn more about Custom Commands](custom-commands.md)
* [Learn more about Direct Line Speech](direct-line-speech.md) * [Get the Speech SDK](speech-sdk.md)
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
These models can be used with Completion API requests. `gpt-35-turbo` is the onl
<br><sup>2</sup> East US and West Europe were previously available, but due to high demand they are currently unavailable for new customers to use for fine-tuning. Please use US South Central region for fine-tuning. <br><sup>3</sup> Currently, only version `0301` of this model is available. This version of the model will be deprecated on 8/1/2023 in favor of newer version of the gpt-35-model. See [ChatGPT model versioning](../how-to/chatgpt.md#model-versioning) for more details. - ### GPT-4 Models These models can only be used with the Chat Completion API.
These models can only be used with Completions API requests.
<sup>1</sup> The model is available for fine-tuning by request only. Currently we aren't accepting new requests to fine-tune the model. - ### Embeddings Models These models can only be used with Embedding API requests.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/whats-new.md
Previously updated : 03/27/2023 Last updated : 03/21/2023 recommendations: false keywords:
keywords:
## March 2023
-### Fine-tuned model change
-
-Deployed customized models (fine-tuned models) that are inactive for greater than 90 days will now automatically have their deployments deleted. **The underlying fine-tuned model is retained and can be redeployed at any time**. Once a fine-tuned model is deployed, it will continue to incur an hourly hosting cost regardless of whether you're actively using the model. To learn more about planning and managing costs with Azure OpenAI, refer to our [cost management guide](/azure/cognitive-services/openai/how-to/manage-costs#base-series-and-codex-series-fine-tuned-models).
-
-### New Features
 - **GPT-4 series models are now available in preview on Azure OpenAI**. To request access, existing Azure OpenAI customers can [apply by filling out this form](https://aka.ms/oai/get-gpt4). These models are currently available in the East US and South Central US regions. - **New Chat Completion API for ChatGPT and GPT-4 models released in preview on 3/21**. To learn more, check out the [updated quickstarts](./quickstart.md) and [how-to article](./how-to/chatgpt.md).
communication-services Email Attachment Allowed Mime Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email/email-attachment-allowed-mime-types.md
+
+ Title: Allowed attachment types for sending email
+
+description: Learn about how validation for attachment MIME types works for Email Communication Services.
++++ Last updated : 03/24/2023+++++
+# Allowed attachment types for sending email in Azure Communication Services Email
+
+The [Send Email operation](../../quickstarts/email/send-email.md) allows the option for the sender to add attachments to an outgoing email. Along with the content itself, the sender must include the file attachment type using the MIME standard when making a request with an attachment. Many common file types are accepted, such as Word documents, Excel spreadsheets, many image and video formats, contacts, and calendar invites.
+
+## What is a MIME type?
+
+MIME (Multipurpose Internet Mail Extensions) types are a way of identifying the type of data that is being sent over the internet. When users send email requests with Azure Communication Services Email, they can specify the MIME type of the email content, which allows the recipient's email client to properly display and interpret the message. If an email message includes an attachment, the MIME type would be set to the appropriate file type (for example, "application/pdf" for a PDF document).
+
+Developers can ensure that the recipient's email client properly formats and interprets the email message by using MIME types, irrespective of the software or platform being used. This information helps to ensure that the email message is delivered correctly and that the recipient can access the content as intended. In addition, using MIME types can also help to improve the security of email communications, as they can be used to indicate whether an email message includes executable content or other potentially harmful elements.
+
+To sum up, MIME types are a critical component of email communication, and by using them with Azure Communication Services Email, developers can help ensure that their email messages are delivered correctly and securely.
+
+## Allowed attachment types
+
+Here's a table listing some of the most common supported file extensions and their corresponding MIME types for email attachments using Azure Communication Services Email:
+
+| File Extension | Description | MIME Type |
+| | | |
+| .3gp | 3GPP multimedia file | `video/3gpp` |
+| .3g2 | 3GPP2 multimedia file | `video/3gpp2` |
+| .7z | 7-Zip compressed file | `application/x-7z-compressed` |
+| .aac | AAC audio | `audio/aac` |
+| .avi | AVI video file | `video/x-msvideo` |
+| .bmp | BMP image | `image/bmp` |
+| .csv | Comma-separated values | `text/csv` |
+| .doc | Microsoft Word document (97-2003) | `application/msword` |
+| .docm | Microsoft Word macro-enabled document | `application/vnd.ms-word.document.macroEnabled.12` |
+| .docx | Microsoft Word document (2007 or later) | `application/vnd.openxmlformats-officedocument.wordprocessingml.document` |
+| .eot | Embedded OpenType font | `application/vnd.ms-fontobject` |
+| .epub | EPUB ebook file | `application/epub+zip` |
+| .gif | GIF image | `image/gif` |
+| .gz | Gzip compressed file | `application/gzip` |
+| .ico | Icon file | `image/vnd.microsoft.icon` |
+| .ics | iCalendar file | `text/calendar` |
+| .jpg, .jpeg | JPEG image | `image/jpeg` |
+| .json | JSON data | `application/json` |
+| .mid, .midi | MIDI audio file | `audio/midi` |
+| .mp3 | MP3 audio file | `audio/mpeg` |
+| .mp4 | MP4 video file | `video/mp4` |
+| .mpeg | MPEG video file | `video/mpeg` |
+| .oga | Ogg audio file | `audio/ogg` |
+| .ogv | Ogg video file | `video/ogg` |
+| .ogx | Ogg file | `application/ogg` |
+| .one | Microsoft OneNote file | `application/onenote` |
+| .opus | Opus audio file | `audio/opus` |
+| .otf | OpenType font | `font/otf` |
+| .pdf | PDF document | `application/pdf` |
+| .png | PNG image | `image/png` |
+| .ppsm | PowerPoint slideshow (macro-enabled) | `application/vnd.ms-powerpoint.slideshow.macroEnabled.12` |
+| .ppsx | PowerPoint slideshow | `application/vnd.openxmlformats-officedocument.presentationml.slideshow` |
+| .ppt | PowerPoint presentation (97-2003) | `application/vnd.ms-powerpoint` |
+| .pptm | PowerPoint macro-enabled presentation | `application/vnd.ms-powerpoint.presentation.macroEnabled.12` |
+| .pptx | PowerPoint presentation (2007 or later) | `application/vnd.openxmlformats-officedocument.presentationml.presentation` |
+| .pub | Microsoft Publisher document | `application/vnd.ms-publisher` |
+| .rar | RAR compressed file | `application/x-rar-compressed` |
+| .rpmsg | Outlook email message | `application/vnd.ms-outlook` |
+| .rtf | Rich Text Format document | `application/rtf` |
+| .svg | Scalable Vector Graphics image | `image/svg+xml` |
+| .tar | Tar archive file | `application/x-tar` |
+| .tif, .tiff | Tagged Image File Format | `image/tiff` |
+| .ttf | TrueType Font | `font/ttf` |
+| .txt | Text Document | `text/plain` |
+| .vsd | Microsoft Visio Drawing | `application/vnd.visio` |
+| .wav | Waveform Audio File Format | `audio/wav` |
+| .weba | WebM Audio File | `audio/webm` |
+| .webm | WebM Video File | `video/webm` |
+| .webp | WebP Image File | `image/webp` |
+| .wma | Windows Media Audio File | `audio/x-ms-wma` |
+| .wmv | Windows Media Video File | `video/x-ms-wmv` |
+| .woff | Web Open Font Format | `font/woff` |
+| .woff2 | Web Open Font Format 2.0 | `font/woff2` |
+| .xls | Microsoft Excel Spreadsheet (97-2003) | `application/vnd.ms-excel` |
+| .xlsb | Microsoft Excel Binary Spreadsheet | `application/vnd.ms-excel.sheet.binary.macroEnabled.12` |
+| .xlsm | Microsoft Excel Macro-Enabled Spreadsheet | `application/vnd.ms-excel.sheet.macroEnabled.12` |
+| .xlsx | Microsoft Excel Spreadsheet (OpenXML) | `application/vnd.openxmlformats-officedocument.spreadsheetml.sheet` |
+| .xml | Extensible Markup Language File | `application/xml`, `text/xml` |
+| .zip | ZIP Archive | `application/zip` |
+
+There are many other file extensions and MIME types in use for email attachments. However, only the types in this list are accepted for attachments sent through the SendMail operation. Additionally, different email clients and servers may impose their own limits on file size and type, which can cause email delivery to fail. Ensure that the recipient can accept the attachment, or refer to the documentation for the recipient's email provider.
+
+## Additional information
+
+The Internet Assigned Numbers Authority (IANA) is a department of the Internet Corporation for Assigned Names and Numbers (ICANN) responsible for the global coordination of various Internet protocols and resources, including the management and registration of MIME types.
+
+The IANA maintains a registry of standardized MIME types, which includes a unique identifier for each MIME type, a short description of its purpose, and the associated file extensions. For the most up-to-date information regarding MIME types, including the definitive list of media types, it's recommended to visit the [IANA Website](https://www.iana.org/assignments/media-types/media-types.xhtml) directly.
+
+## Next steps
+
+* [What is Email Communication Service](./prepare-email-communication-resource.md)
+
+* [Email domains and sender authentication for Azure Communication Services](./email-domain-and-sender-authentication.md)
+
+* [Get started with sending email using Email Communication Service in Azure Communication Service](../../quickstarts/email/send-email.md)
+
+* [Get started by connecting Email Communication Service with an Azure Communication Service resource](../../quickstarts/email/connect-email-communication-resource.md)
+
+You may also find the following documents helpful:
+
+- Familiarize yourself with the [Email client library](../email/sdk-features.md)
+- To send emails with custom verified domains, see [Add custom domains](../../quickstarts/email/add-custom-verified-domains.md).
+- To send emails with Azure Managed Domains, see [Add Azure Managed domains](../../quickstarts/email/add-azure-managed-domains.md).
communications-gateway Monitor Azure Communications Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/monitor-azure-communications-gateway.md
You can analyze metrics for Azure Communications Gateway, along with metrics fro
For a list of the metrics collected, see [Monitoring Azure Communications Gateway data reference](monitoring-azure-communications-gateway-data-reference.md).
-For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
- ## Filtering and splitting All Azure Communications Gateway metrics support the **Region** dimension, allowing you to filter any metric by the Service Locations defined in your Azure Communications Gateway resource.
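
If you query metrics with the Azure CLI, the general-purpose `az monitor metrics list` command accepts dimension filters. The following sketch is illustrative only; the resource ID, metric name, and Service Location value are placeholders to replace with values from your own deployment.

```azurecli
# Illustrative sketch: query a metric and filter it by the Region dimension.
# The resource ID, metric name, and region value are placeholders.
az monitor metrics list \
  --resource <communications-gateway-resource-id> \
  --metric <metric-name> \
  --filter "Region eq '<service-location>'" \
  --interval PT5M
```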
container-apps Client Certificate Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/client-certificate-authorization.md
+
+ Title: Configure client certificate authentication in Azure Container Apps
+description: How to configure client authentication in Azure Container Apps.
++++ Last updated : 03/29/2023+++
+# Configure client certificate authentication in Azure Container Apps
+
+Azure Container Apps supports client certificate authentication (also known as mutual TLS or mTLS) that allows access to your container app through two-way authentication. This article shows you how to configure client certificate authorization in Azure Container Apps.
+
+When client certificates are used, the TLS certificates are exchanged between the client and your container app to authenticate identity and encrypt traffic. Client certificates are often used in "zero trust" security models to authorize client access within an organization.
+
+For example, you may want to require a client certificate for a container app that manages sensitive data.
+
+Container Apps accepts client certificates in the PKCS12 format that are issued by a trusted certificate authority (CA) or are self-signed.
+
+## Configure client certificate authorization
+
+Set the `clientCertificateMode` property in your container app template to configure support of client certificates.
+
+The property can be set to one of the following values:
+
+- `require`: The client certificate is required for all requests to the container app.
+- `accept`: The client certificate is optional. If the client certificate isn't provided, the request is still accepted.
+- `ignore`: The client certificate is ignored.
+
+Ingress passes the client certificate to the container app if `require` or `accept` is set.
+
+The following ARM template example configures ingress to require a client certificate for all requests to the container app.
+
+```json
+{
+ "properties": {
+ "configuration": {
+ "ingress": {
+ "clientCertificateMode": "require"
+ }
+ }
+ }
+}
+```
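
If you prefer the Azure CLI, one possible approach is to export the app definition, edit the ingress section, and reapply it. This is only a sketch; it assumes the `clientCertificateMode` property round-trips through the app's YAML representation.

```azurecli
# Sketch: export the current app definition, edit it, then reapply it.
# Assumes clientCertificateMode appears under configuration.ingress in the exported YAML.
az containerapp show \
  --name <app-name> \
  --resource-group <resource-group> \
  --output yaml > app.yaml

# Edit app.yaml and set configuration.ingress.clientCertificateMode to "require".

az containerapp update \
  --name <app-name> \
  --resource-group <resource-group> \
  --yaml app.yaml
```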
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configure ingress](ingress-how-to.md)
container-apps Communicate Between Microservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/communicate-between-microservices.md
zone_pivot_groups: container-apps-image-build-type
# Tutorial: Communication between microservices in Azure Container Apps
-Azure Container Apps exposes each container app through a domain name if [ingress](ingress.md) is enabled. Ingress endpoints for container apps within an external environment can be either publicly accessible or only available to other container apps in the same [environment](environment.md).
+Azure Container Apps exposes each container app through a domain name if [ingress](ingress-how-to.md) is enabled. Ingress endpoints for container apps within an external environment can be either publicly accessible or only available to other container apps in the same [environment](environment.md).
Once you know the fully qualified domain name for a given container app, you can make direct calls to the service from other container apps within the shared environment.
container-apps Connect Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/connect-apps.md
# Connect applications in Azure Container Apps
-Azure Container Apps exposes each container app through a domain name if [ingress](ingress.md) is enabled. Ingress endpoints can be exposed either publicly to the world or internally and only available to other container apps in the same [environment](environment.md).
+Azure Container Apps exposes each container app through a domain name if [ingress](ingress-overview.md) is enabled. Ingress endpoints can be exposed either publicly to the world and to other container apps in the same environment, or ingress can be limited to only other container apps in the same [environment](environment.md).
-Once you know a container app's domain name, then you can call the location within your application code to connect multiple container apps together.
+You can call other container apps in the same environment from your application code using one of the following methods:
+
+- The default fully qualified domain name (FQDN)
+- A custom domain name
+- The container app name
+- A Dapr URL
> [!NOTE]
-> When you call another container in the same environment using the FQDN, the network traffic never leaves the environment.
+> When you call another container in the same environment using the FQDN or app name, the network traffic never leaves the environment.
A sample solution that shows how to call between container apps using either the FQDN or Dapr is available on [Azure Samples](https://github.com/Azure-Samples/container-apps-connect-multiple-apps)
The following diagram shows how these values are used to compose a container app
## Dapr location
-Developing microservices often requires you to implement patterns common to distributed architecture. Dapr allows you to secure microservices with mutual TLS, trigger retries when errors occur, and take advantage of distributed tracing when Azure Application Insights is enabled.
+Developing microservices often requires you to implement patterns common to distributed architecture. Dapr allows you to secure microservices with mutual TLS (client certificates), trigger retries when errors occur, and take advantage of distributed tracing when Azure Application Insights is enabled.
A microservice that uses Dapr is available through the following URL pattern:
container-apps Ingress How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress-how-to.md
+
+ Title: Configure Ingress for your app in Azure Container Apps
+description: How to configure ingress for your container app
++++ Last updated : 03/28/2023++
+zone_pivot_groups: arm-azure-cli-portal
++
+# Configure Ingress for your app in Azure Container Apps
+
+This article shows you how to enable [ingress](ingress-overview.md) features for your container app. Ingress is an application-wide setting. Changes to ingress settings apply to all revisions simultaneously, and don't generate new revisions.
+
+## Ingress settings
+
+You can set the following ingress template properties:
+
+| Property | Description | Values | Required |
+|||||
+| `allowInsecure` | Allows insecure traffic to your container app. When set to `true`, HTTP requests to port 80 aren't automatically redirected to port 443 using HTTPS, allowing insecure connections. | `false` (default), `true` enables insecure connections | No |
+| `clientCertificateMode` | Client certificate mode for mTLS authentication. `Ignore` indicates the server drops the client certificate when forwarding the request. `Accept` indicates the server forwards the client certificate but doesn't require one. `Require` indicates the server requires a client certificate. | `Required`, `Accept`, `Ignore` (default) | No |
+| `customDomains` | Custom domain bindings for Container Apps' hostnames. See [Custom domains and certificates](custom-domains-certificates.md) | An array of bindings | No |
+| `exposedPort` | (TCP ingress only) The port TCP listens on. If `external` is `true`, the value must be unique in the Container Apps environment. | A port number from `1` to `65535`. (can't be `80` or `443`) | No |
+| `external` | Allow ingress to your app from outside its Container Apps environment. | `true` or `false` (default) | Yes |
+| `ipSecurityRestrictions` | IP ingress restrictions. See [Set up IP ingress restrictions](ip-restrictions.md) | An array of rules | No |
+| `stickySessions.affinity` | Enables [session affinity](sticky-sessions.md). | `none` (default), `sticky` | No |
+| `targetPort` | The port your container listens to for incoming requests. | Set this value to the port number that your container uses. For HTTP ingress, your application ingress endpoint is always exposed on port `443`. | Yes |
+| `traffic` | [Traffic splitting](traffic-splitting.md) weights split between revisions. | An array of rules | No |
+| `transport` | The transport protocol type. | `auto` (default) detects HTTP/1 or HTTP/2, `http` for HTTP/1, `http2` for HTTP/2, `tcp` for TCP. | No |
++
+## Enable ingress
+
+You can configure ingress for your container app using the Azure CLI, an ARM template, or the Azure portal.
++
+# [Azure CLI](#tab/azure-cli)
+
+The `az containerapp ingress enable` command enables ingress for your container app. You must specify the target port, and you can optionally set the exposed port if your transport type is `tcp`.
+
+```azurecli
+az containerapp ingress enable \
+ --name <app-name> \
+ --resource-group <resource-group> \
+ --target-port <target-port> \
+ --exposed-port <tcp-exposed-port> \
+ --transport <transport> \
+    --type <external> \
+ --allow-insecure
+```
+
+`az containerapp ingress enable` ingress arguments:
+
+| Option | Property | Description | Values | Required |
+| | | | | |
+| `--type` | external | Allow ingress to your app from anywhere, or limit ingress to its internal Container Apps environment. | `external` or `internal` | Yes |
+|`--allow-insecure` | allowInsecure | Allow HTTP connections to your app. | | No |
+| `--target-port` | targetPort | The port your container listens to for incoming requests. | Set this value to the port number that your container uses. Your application ingress endpoint is always exposed on port `443`. | Yes |
+|`--exposed-port` | exposedPort | (TCP ingress only) The port for TCP ingress. If `external` is `true`, the value must be unique in the Container Apps environment. | A port number from `1` to `65535` (can't be `80` or `443`) | No |
+|`--transport` | transport | The transport protocol type. | `auto` (default) detects HTTP/1 or HTTP/2, `http` for HTTP/1, `http2` for HTTP/2, `tcp` for TCP. | No |
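
For illustration, the following hedged example enables external HTTP ingress on port 8080. The app and resource group names are placeholders.

```azurecli
# Example with placeholder names: enable external HTTP ingress on port 8080.
az containerapp ingress enable \
  --name my-container-app \
  --resource-group my-resource-group \
  --type external \
  --target-port 8080 \
  --transport auto
```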
+++
+# [Portal](#tab/portal)
+
+Enable ingress for your container app by using the portal.
+
+You can enable ingress when you create your container app, or you can enable ingress for an existing container app.
+- To configure ingress when you create your container app, select **Ingress** from the **App Configuration** tab of the container app creation wizard.
+- To configure ingress for an existing container app, select **Ingress** from the **Settings** menu of the container app resource page.
+
+### Enable ingress for your container app
+
+You can configure ingress when you create your container app by using the Azure portal.
++
+1. Set **Ingress** to **Enabled**.
+1. Configure the ingress settings for your container app.
+1. Select **Limited to Container Apps Environment** for internal ingress or **Accepting traffic from anywhere** for external ingress.
+1. Select the **Ingress Type**: **HTTP** or **TCP** (TCP ingress is only available in environments configured with a custom VNET).
+1. If *HTTP* is selected for the **Ingress Type**, select the **Transport**: **Auto**, **HTTP/1** or **HTTP/2**.
+1. Select **Insecure connections** if you want to allow HTTP connections to your app.
+1. Enter the **Target port** for your container app.
+1. If you selected **TCP** for the **Ingress Type**, enter the **Exposed port** for your container app. The exposed port number can be `1` to `65535` (`80` and `443` aren't allowed).
+
+The **Ingress** settings page for your container app also allows you to configure **IP Restrictions**. For information to configure IP restriction, see [IP Restrictions](ip-restrictions.md).
+++
+# [ARM template](#tab/arm-template)
+
+Enable ingress for your container app by using the `ingress` configuration property. Set the `external` property to `true`, and set your `transport` and `targetPort` properties.
+- The `external` property can be set to `true` for external ingress or `false` for internal ingress.
+- Set the `transport` to `auto` to detect HTTP/1 or HTTP/2, `http` for HTTP/1, `http2` for HTTP/2, or `tcp` for TCP.
+- Set the `targetPort` to the port number that your container uses. Your application ingress endpoint is always exposed on port `443`.
+- If the transport type is `tcp`, set the `exposedPort` property to the port for TCP ingress. If ingress is external, the value must be unique in the Container Apps environment. Use a port number from `1` to `65535` (`80` and `443` aren't allowed).
+
+```json
+{
+ ...
+ "configuration": {
+ "ingress": {
+ "external": true,
+ "transport": "tcp",
+ "targetPort": 80,
+      "exposedPort": 8080
+    }
+ }
+}
+```
+++++
+## Disable ingress
+
+# [Azure CLI](#tab/azure-cli)
+
+Disable ingress for your container app by using the `az containerapp ingress disable` command.
+
+```azurecli
+az containerapp ingress disable \
+ --name <app-name> \
+  --resource-group <resource-group>
+```
+++
+# [Portal](#tab/portal)
+
+You can disable ingress for your container app using the portal.
+
+1. Select **Ingress** from the **Settings** menu of the container app page.
+1. Deselect the **Ingress** **Enabled** setting.
+1. Select **Save**.
++++
+# [ARM template](#tab/arm-template)
+
+Disable ingress for your container app by omitting the `ingress` configuration property from `properties.configuration` entirely.
++++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Ingress in Azure Container Apps](ingress-overview.md)
container-apps Ingress Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress-overview.md
+
+ Title: Ingress in Azure Container Apps
+description: Ingress options for Azure Container Apps
++++ Last updated : 03/29/2023+++
+# Ingress in Azure Container Apps
+
+Azure Container Apps allows you to expose your container app to the public web, your virtual network (VNET), and other container apps within your environment by enabling ingress. Ingress settings are enforced through a set of rules that control the routing of external and internal traffic to your container app. When you enable ingress, you don't need to create an Azure Load Balancer, public IP address, or any other Azure resources to enable incoming HTTP requests or TCP traffic.
+
+Ingress supports:
+
+- [External and internal ingress](#external-and-internal-ingress)
+- [HTTP and TCP ingress types](#protocol-types)
+- [Domain names](#domain-names)
+- [IP restrictions](#ip-restrictions)
+- [Authentication](#authentication)
+- [Traffic splitting between revisions](#traffic-splitting)
+- [Session affinity](#session-affinity)
+
+Example ingress configuration showing ingress split between two revisions:
++
+For configuration details, see [Configure ingress](ingress-how-to.md).
+
+## External and internal ingress
+
+When you enable ingress, you can choose between two types of ingress:
+
+- External: Accepts traffic from both the public internet and your container app's internal environment.
+- Internal: Allows only internal access from within your container app's environment.
+
+Each container app within an environment can be configured with different ingress settings. For example, in a scenario with multiple microservice apps, to increase security you may have a single container app that receives public requests and passes the requests to a background service. In this scenario, you would configure the public-facing container app with external ingress and the internal-facing container app with internal ingress.
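
As a rough sketch of that scenario with the Azure CLI (app names, images, and the environment name are placeholders), you might create the two apps with different ingress types:

```azurecli
# Sketch: public-facing app with external ingress (placeholder names and image).
az containerapp create \
  --name frontend \
  --resource-group my-resource-group \
  --environment my-environment \
  --image <frontend-image> \
  --ingress external \
  --target-port 80

# Sketch: background service reachable only from within the environment.
az containerapp create \
  --name backend \
  --resource-group my-resource-group \
  --environment my-environment \
  --image <backend-image> \
  --ingress internal \
  --target-port 8080
```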
+
+## Protocol types
+
+Container Apps supports two protocols for ingress: HTTP and TCP.
+
+### HTTP
+
+With HTTP ingress enabled, your container app has:
+
+- Support for TLS termination
+- Support for HTTP/1.1 and HTTP/2
+- Support for WebSocket and gRPC
+- HTTPS endpoints that always use TLS 1.2, terminated at the ingress point
+- Endpoints that expose ports 80 (for HTTP) and 443 (for HTTPS)
+ - By default, HTTP requests to port 80 are automatically redirected to HTTPS on 443
+- A fully qualified domain name (FQDN)
+- A request timeout of 240 seconds
+
+#### HTTP headers
+
+HTTP ingress adds headers to pass metadata about the client request to your container app. For example, the `X-Forwarded-Proto` header is used to identify the protocol that the client used to connect with the Container Apps service. The following table lists the HTTP headers that are relevant to ingress in Container Apps:
+
+| Header | Description | Values |
+||||
+| `X-Forwarded-Proto` | Protocol used by the client to connect with the Container Apps service. | `http` or `https` |
+| `X-Forwarded-For` | The IP address of the client that sent the request. | |
+| `X-Forwarded-Host` | The host name the client used to connect with the Container Apps service. | |
+
+### <a name="tcp"></a>TCP (preview)
+
+Container Apps supports TCP-based protocols other than HTTP or HTTPS. For example, you can use TCP ingress to expose a container app that uses the [Redis protocol](https://redis.io/topics/protocol).
+
+> [!NOTE]
+> TCP ingress is in public preview and is only supported in Container Apps environments that use a [custom VNET](vnet-custom.md).
+
+With TCP ingress enabled, your container app:
+
+- Is accessible to other container apps in the same environment via its name (defined by the `name` property in the Container Apps resource) and exposed port number.
+- Is accessible externally via its fully qualified domain name (FQDN) and exposed port number if the ingress is set to "external".
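
For example, a hedged CLI sketch that exposes a Redis-style service over external TCP ingress might look like the following; the app name, resource group, and port are placeholders.

```azurecli
# Sketch: enable external TCP ingress on port 6379 (placeholder values).
az containerapp ingress enable \
  --name my-redis-app \
  --resource-group my-resource-group \
  --type external \
  --transport tcp \
  --target-port 6379 \
  --exposed-port 6379
```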
+
+## Domain names
+
+You can access your app in the following ways:
+
+- The default fully qualified domain name (FQDN): Each app in a Container Apps environment is automatically assigned an FQDN based on the environment's DNS suffix. To customize an environment's DNS suffix, see [Custom environment DNS Suffix](environment-custom-dns-suffix.md).
+- A custom domain name: You can configure a custom DNS domain for your Container Apps environment. For more information, see [Custom domain names and certificates](./custom-domains-certificates.md).
+- The app name: You can use the app name for communication between apps in the same environment.
+
+To get the FQDN for your app, see [Location](connect-apps.md#location).
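
You can also read the default FQDN with the Azure CLI. The following is a sketch; the query path assumes the standard Container Apps resource shape.

```azurecli
# Sketch: print the app's default FQDN (query path assumed from the resource schema).
az containerapp show \
  --name <app-name> \
  --resource-group <resource-group> \
  --query properties.configuration.ingress.fqdn \
  --output tsv
```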
+
+## IP restrictions
+
+Container Apps supports IP restrictions for ingress. You can create rules to either configure IP addresses that are allowed or denied access to your container app. For more information, see [Configure IP restrictions](ip-restrictions.md).
+
+## Authentication
+
+Azure Container Apps provides built-in authentication and authorization features to secure your external ingress-enabled container app. For more information, see [Authentication and authorization in Azure Container Apps](authentication.md).
+
+You can configure your app to support client certificates (mTLS) for authentication and traffic encryption. For more information, see [Configure client certificates](client-certificate-authorization.md)
++
+## Traffic splitting
+
+Container Apps allows you to split incoming traffic between active revisions. When you define a splitting rule, you assign the percentage of inbound traffic to go to different revisions. For more information, see [Traffic splitting](traffic-splitting.md).
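
As a hedged sketch, splitting traffic between two revisions from the CLI might use `az containerapp ingress traffic set`; the revision names are placeholders.

```azurecli
# Sketch: send 80% of traffic to one revision and 20% to another (placeholder revision names).
az containerapp ingress traffic set \
  --name <app-name> \
  --resource-group <resource-group> \
  --revision-weight <revision-1>=80 <revision-2>=20
```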
+
+## Session affinity
+
+Session affinity, also known as sticky sessions, is a feature that allows you to route all HTTP requests from a client to the same container app replica. This feature is useful for stateful applications that require a consistent connection to the same replica. For more information, see [Session affinity](sticky-sessions.md).
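
If you manage the app from the CLI, enabling sticky sessions might look like the following sketch, assuming the `az containerapp ingress sticky-sessions` command is available in your CLI extension version.

```azurecli
# Sketch: turn on session affinity (sticky sessions) for an app.
az containerapp ingress sticky-sessions set \
  --name <app-name> \
  --resource-group <resource-group> \
  --affinity sticky
```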
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configure ingress](ingress-how-to.md)
container-apps Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress.md
- Title: Set up HTTPS or TCP ingress in Azure Container Apps
-description: Enable public and private endpoints in your app with Azure Container Apps
---- Previously updated : 11/28/2022----
-# Set up HTTPS or TCP ingress in Azure Container Apps
-
-Azure Container Apps allows you to expose your container app to the public web, to your VNET, or to other container apps within your environment by enabling ingress. When you enable ingress, you don't need to create an Azure Load Balancer, public IP address, or any other Azure resources to enable incoming HTTPS requests.
-
-Each container app can be configured with different ingress settings. For example, you can have one container app that is exposed to the public web and another that is only accessible from within your Container Apps environment.
-
-## Ingress types
-
-Azure Container Apps supports two types of ingress: HTTPS and TCP.
-
-### HTTPS
-
-With HTTPS ingress enabled, your container app features the following characteristics:
--- Supports TLS termination-- Supports HTTP/1.1 and HTTP/2-- Supports WebSocket and gRPC-- HTTPS endpoints always use TLS 1.2, terminated at the ingress point-- Endpoints always expose ports 80 (for HTTP) and 443 (for HTTPS)
- - By default, HTTP requests to port 80 are automatically redirected to HTTPS on 443
-- The container app is accessed via its fully qualified domain name (FQDN)-- Request timeout is 240 seconds-
-### <a name="tcp"></a>TCP (preview)
-
-TCP ingress is useful for exposing container apps that use a TCP-based protocol other than HTTP or HTTPS.
-
-> [!NOTE]
-> TCP ingress is in public preview and is only supported in Container Apps environments that use a [custom VNET](vnet-custom.md).
-
-With TCP ingress enabled, your container app features the following characteristics:
--- The container app is accessed via its fully qualified domain name (FQDN) and exposed port number-- Other container apps in the same environment can also access a TCP ingress-enabled container app by using its name (defined by the `name` property in the Container Apps resource) and exposed port number-
-## Configuration
-
-Ingress is an application-wide setting. Changes to ingress settings apply to all revisions simultaneously, and don't generate new revisions.
-
-The ingress configuration section has the following form:
-
-```json
-{
- ...
- "configuration": {
- "ingress": {
- "external": true,
- "targetPort": 80,
- "transport": "auto"
- }
- }
-}
-```
-
-The following settings are available when configuring ingress:
-
-| Property | Description | Values | Required |
-|||||
-| `external` | Whether your ingress-enabled app is accessible outside its Container Apps environment. |`true` for visibility from internet or VNET, depending on app environment endpoint configured, `false` for visibility within app environment only. (default) | Yes |
-| `targetPort` | The port your container listens to for incoming requests. | Set this value to the port number that your container uses. Your application ingress endpoint is always exposed on port `443`. | Yes |
-| `exposedPort` | (TCP ingress only) The port used to access the app. If `external` is `true`, the value must be unique in the Container Apps environment and cannot be `80` or `443`. | A port number from `1` to `65535`. | No |
-| `transport` | The transport type. | `http` for HTTP/1, `http2` for HTTP/2, `auto` to automatically detect HTTP/1 or HTTP/2 (default), `tcp` for TCP. | No |
-| `allowInsecure` | Allows insecure traffic to your container app. | `false` (default), `true`<br><br>If set to `true`, HTTP requests to port 80 aren't automatically redirected to port 443 using HTTPS, allowing insecure connections. | No |
-
-> [!NOTE]
-> To disable ingress for your application, omit the `ingress` configuration property entirely.
-
-## Fully qualified domain name
-
-With ingress enabled, your application is assigned a fully qualified domain name (FQDN). The domain name takes the following forms:
-
-|Ingress visibility setting | Fully qualified domain name |
-|||
-| External | `<APP_NAME>.<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`|
-| Internal | `<APP_NAME>.internal.<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io` |
-
-For HTTP ingress, traffic is routed to individual applications based on the FQDN in the host header.
-
-For TCP ingress, traffic is routed to individual applications based on the FQDN and its *exposed* port number. Other container apps in the same environment can also access a TCP ingress-enabled container app by using its name (defined by the container app's `name` property) and its *exposedPort* number.
-
-For applications with external ingress visibility, the following conditions apply:
-- An internal Container Apps environment has a single private IP address for applications. For container apps in internal environments, you must configure [DNS](./networking.md#dns) for VNET-scope ingress.-- An external Container Apps environment or Container Apps environment that is not in a VNET has a single public IP address for applications.-
-You can get access to the environment's unique identifier by querying the environment settings.
--
-## <a name="ip-access-restrictions"></a>Inbound access restrictions by IP address ranges (preview)
-
-By default, ingress doesn't filter traffic. You can add restrictions to limit access based on IP addresses. There are two ways to filter traffic:
-
-* **Allowlist**: Deny all inbound traffic, but allow access from a list of IP address ranges
-* **Denylist**: Allow all inbound traffic, but deny access from a list of IP address ranges
-
-> [!NOTE]
-> If defined, all rules must be the same type. You cannot combine allow rules and deny rules.
->
-> IPv4 addresses are supported. Define each IPv4 address block in Classless Inter-Domain Routing (CIDR) notation. To learn more about CIDR notation, see [Classless Inter-Domain Routing](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing).
-
-### Configure an allowlist
-
-To allow inbound traffic from a specified IP range, run the following Azure CLI command.
-
-```azurecli
-az containerapp ingress access-restriction set \
- --name MyContainerapp \
- --resource-group MyResourceGroup \
- --rule-name restrictionName \
- --ip-address 192.168.1.1/28 \
- --description "Restriction description." \
- --action Allow
-```
-
-Add more allow rules by repeating the command with a different IP address range in the `--ip-address` parameter. When you configure one or more allow rules, only traffic that matches at least one rule is allowed. All other traffic is denied.
-
-### Configure a denylist
-
-To deny inbound traffic from a specified IP range, run the following Azure CLI command.
-
-```azurecli
-az containerapp ingress access-restriction set \
- --name MyContainerapp \
- --resource-group MyResourceGroup \
- --rule-name my-restriction \
- --ip-address 192.168.1.1/28 \
- --description "Restriction description."
- --action Deny
-```
-
-Add more deny rules by repeating the command with a different IP address range in the `--ip-address` parameter. When you configure one or more deny rules, any traffic that matches at least one rule is denied. All other traffic is allowed.
-
-### Remove access restrictions
-
-To remove an access restriction, run the following Azure CLI command.
-
-```azurecli
-az containerapp ingress access-restriction remove
- --name MyContainerapp \
- --resource-group MyResourceGroup \
- --rule-name my-restriction
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Manage scaling](scale-app.md)
container-apps Ip Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ip-restrictions.md
+
+ Title: Set up IP ingress restrictions in Azure Container Apps
+description: Enable IP restrictions to limit access to your app with Azure Container Apps.
++++ Last updated : 03/28/2023+
+zone_pivot_groups: azure-cli-or-portal
++
+# Set up IP ingress restrictions in Azure Container Apps
+
+Azure Container Apps allows you to limit inbound traffic to your container app by configuring IP ingress restrictions in the ingress configuration.
+
+There are two types of restrictions:
+
+* *Allow*: Allow inbound traffic only from address ranges you specify in allow rules.
+* *Deny*: Deny inbound traffic only from address ranges you specify in deny rules.
+
+When no IP restriction rules are defined, all inbound traffic is allowed.
+
+IP restriction rules contain the following properties:
+
+| Property | Value | Description |
+|-|-|-|
+| name | string | The name of the rule. |
+| description | string | A description of the rule. |
+| ipAddressRange | IP address range in CIDR format | The IP address range in CIDR notation. |
+| action | Allow or Deny | The action to take for the rule. |
+
+The `ipAddressRange` parameter accepts IPv4 addresses. Define each IPv4 address block in [Classless Inter-Domain Routing (CIDR)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation.
+
+> [!NOTE]
+> All rules must be the same type. You cannot combine allow rules and deny rules.
+
+## Manage IP ingress restrictions
+
+You can manage IP access restriction rules through the Azure portal or Azure CLI.
++
+### Add rules
+
+1. Go to your container app in the Azure portal.
+1. Select **Ingress** from the left side menu.
+1. Select the **IP Security Restrictions Mode** toggle to enable IP restrictions. You can choose to allow or deny traffic from the specified IP address ranges.
+1. Select **Add** to create the rule.
+
+ :::image type="content" source="media/ingress/screenshot-ingress-page-ip-restrictions.png" alt-text="Screenshot of IP restriction settings on container app Ingress page.":::
+
+1. Enter values in the following fields:
+
+ | Field | Description |
+ |-|-|
+ |**IPv4 address or range**|Enter the IP address or range of IP addresses in CIDR notation. For example, to allow access from a single IP address, use the following format: *10.200.10.2/32*.|
+ |**Name**|Enter a name for the rule.|
+ |**Description**|Enter a description for the rule.|
+
+1. Select **Add**.
+1. Repeat steps 4-6 to add more rules.
+1. When you have finished adding rules, select **Save**.
+ :::image type="content" source="media/ingress/screenshot-save-ip-restriction.png" alt-text="Screenshot to save IP restrictions on container app Ingress page.":::
+
+### Update a rule
+
+1. Go to your container app in the Azure portal.
+1. Select **Ingress** from the left side menu.
+1. Select the rule you want to update.
+1. Change the rule settings.
+1. Select **Save** to save the updates.
+1. Select **Save** on the Ingress page to save the updated rules.
+
+### Delete a rule
+
+1. Go to your container app in the Azure portal.
+1. Select **Ingress** from the left side menu.
+1. Select the delete icon next to the rule you want to delete.
+1. Select **Save**.
+++
+You can manage IP Access Restrictions using the `az containerapp ingress access-restriction` command group. This command group has the options to:
+
+- `set`: Create or update a rule.
+- `remove`: Delete a rule.
+- `list`: List all rules.
+
+### Create or update rules
+
+You can create or update IP restrictions using the `az containerapp ingress access-restriction set` command.
+
+The `az containerapp ingress access-restriction set` command uses the following parameters.
+
+| Argument | Values | Description |
+|-|--|-|
+| `--rule-name` (required) | String | Specifies the name of the access restriction rule. |
+| `--description` | String | Specifies a description for the access restriction rule. |
+| `--action` (required) | Allow, Deny | Specifies whether to allow or deny access from the specified IP address range. |
+| `--ip-address` (required) | IP address or range of IP addresses in CIDR notation | Specifies the IP address range to allow or deny. |
+
+Add more rules by repeating the command with different `--rule-name` and `--ip-address` values.
+++
+#### Create allow rules
+
+The following example `az containerapp ingress access-restriction set` command creates a rule to restrict inbound access to an IP address range. You must delete any existing deny rules before you can add any allow rules.
+
+Replace the values in the following example with your own values.
+
+```azurecli
+az containerapp ingress access-restriction set \
+ --name <CONTAINER_APP_NAME> \
+ --resource-group <RESOURCE_GROUP> \
+ --rule-name "my allow rule" \
+ --description "example of rule allowing access" \
+ --ip-address 192.168.0.1/28 \
+ --action Allow
+```
+
+You can add to the allow rules by repeating the command with different `--ip-address` and `--rule-name` values.
+
+#### Create deny rules
+
+The following example of the `az containerapp ingress access-restriction set` command creates an access rule to deny inbound traffic from a specified IP range. You must delete any existing allow rules before you can add deny rules.
+
+Replace the placeholders in the following example with your own values.
+
+```azurecli
+az containerapp ingress access-restriction set \
+ --name <CONTAINER_APP_NAME> \
+ --resource-group <RESOURCE_GROUP> \
+ --rule-name "my deny rule" \
+ --description "example of rule denying access" \
+ --ip-address 192.168.0.100/28 \
+ --action Deny
+```
+
+You can add to the deny rules by repeating the command with different `--ip-address` and `--rule-name` values. If you use a rule name that already exists, the existing rule is updated.
+
+### Update a rule
+
+You can update a rule using the `az containerapp ingress access-restriction set` command. You can change the IP address range and the rule description, but not the rule name or action.
+
+The `--action` parameter is required, but you can't change the action from Allow to Deny or vice versa.
+If you omit the `--description` parameter, the description is deleted.
+
+The following example updates the IP address range.
+
+```azurecli
+az containerapp ingress access-restriction set \
+ --name <CONTAINER_APP_NAME> \
+ --resource-group <RESOURCE_GROUP> \
+ --rule-name "my deny rule" \
+ --ip-address 192.168.0.1/24 \
+ --description "example of rule denying access" \
+ --action Deny
+```
+
+### Remove access restrictions
+
+The following example `az containerapp ingress access-restriction remove` command removes a rule.
+
+```azurecli
+az containerapp ingress access-restriction remove \
+ --name <CONTAINER_APP_NAME> \
+ --resource-group <RESOURCE_GROUP> \
+ --rule-name "<your rule name>"
+```
+
+### List access restrictions
+
+The following example `az containerapp ingress access-restriction list` command lists the IP restriction rules for the container app.
+
+```azurecli
+az containerapp ingress access-restriction list \
+ --name <CONTAINER_APP_NAME> \
+ --resource-group <RESOURCE_GROUP>
+```
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configure Ingress](ingress-how-to.md)
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
Title: Networking architecture in Azure Container Apps
-description: Learn how to configure virtual networks in Azure Container Apps
+description: Learn how to configure virtual networks in Azure Container Apps.
As you create a custom VNET, keep in mind the following situations:
- If you want your container app to restrict all outside access, create an [internal Container Apps environment](vnet-custom-internal.md). -- When you provide your own VNET, you need to provide a subnet that is dedicated to the Container App Environment you'll deploy. This subnet can't be used by other services.
+- When you provide your own VNET, you need to provide a subnet that is dedicated to the Container App environment you deploy. This subnet can't be used by other services.
- Network addresses are assigned from a subnet range you define as the environment is created.
https://techcommunity.microsoft.com/t5/apps-on-azure-blog/azure-container-apps-v
## HTTP edge proxy behavior
-Azure Container Apps uses [Envoy proxy](https://www.envoyproxy.io/) as an edge HTTP proxy. TLS is terminated on the edge and requests are routed based on their traffic split rules and routes traffic to the correct application.
+Azure Container Apps uses [Envoy proxy](https://www.envoyproxy.io/) as an edge HTTP proxy. TLS is terminated at the edge, and requests are routed to the correct application based on their traffic splitting rules.
HTTP applications scale based on the number of HTTP requests and connections. Envoy routes internal traffic inside clusters. Downstream connections support HTTP/1.1 and HTTP/2, and Envoy automatically detects and upgrades the connection if the client connection requests an upgrade. The upstream connection is defined by setting the `transport` property on the [ingress](azure-resource-manager-api-spec.md#propertiesconfiguration) object.
Under the [ingress](azure-resource-manager-api-spec.md#propertiesconfiguration)
- **Accessibility level**: You can set your container app as externally or internally accessible in the environment. An environment variable `CONTAINER_APP_ENV_DNS_SUFFIX` is used to automatically resolve the FQDN suffix for your environment. -- **Traffic split rules**: You can define traffic split rules between different revisions of your application.
+- **Traffic split rules**: You can define traffic splitting rules between different revisions of your application. For more information, see [Traffic splitting](traffic-splitting.md).
+
+For more information about ingress configuration, see [Ingress in Azure Container Apps](ingress-overview.md).
### Scenarios
-The following scenarios describe configuration settings for common use cases.
-
-#### Rapid iteration
-
-In situations where you're frequently iterating development of your container app, you can set traffic rules to always shift all traffic to the latest deployed revision.
-
-The following example routes all traffic to the latest deployed revision:
-
-```json
-"ingress": {
- "traffic": [
- {
- "latestRevision": true,
- "weight": 100
- }
- ]
-}
-```
-
-Once you're satisfied with the latest revision, you can lock traffic to that revision by updating the `ingress` settings to:
-
-```json
-"ingress": {
- "traffic": [
- {
- "latestRevision": false, // optional
- "revisionName": "myapp--knowngoodrevision",
- "weight": 100
- }
- ]
-}
-```
-
-#### Update existing revision
-
-Consider a situation where you have a known good revision that's serving 100% of your traffic, but you want to issue an update to your app. You can deploy and test new revisions using their direct endpoints without affecting the main revision serving the app.
-
-Once you're satisfied with the updated revision, you can shift a portion of traffic to the new revision for testing and verification.
-
-The following configuration demonstrates how to move 20% of traffic over to the updated revision:
-
-```json
-"ingress": {
- "traffic": [
- {
- "revisionName": "myapp--knowngoodrevision",
- "weight": 80
- },
- {
- "revisionName": "myapp--newerrevision",
- "weight": 20
- }
- ]
-}
-```
-
-#### Staging microservices
-
-When building microservices, you may want to maintain production and staging endpoints for the same app. Use labels to ensure that traffic doesn't switch between different revisions.
-
-The following example demonstrates how to apply labels to different revisions.
-
-```json
-"ingress": {
- "traffic": [
- {
- "revisionName": "myapp--knowngoodrevision",
- "weight": 100
- },
- {
- "revisionName": "myapp--98fdgt",
- "weight": 0,
- "label": "staging"
- }
- ]
-}
-```
+For more information about scenarios, see [Ingress in Azure Container Apps](ingress-overview.md).
## Portal dependencies For every app in Azure Container Apps, there are two URLs.
-The first URL is generated by Container Apps and is used to access your app. See the *Application Url* in the *Overview* window of your container app in the Azure portal for the fully qualified domain name (FQDN) of your container app.
+Container Apps generates the first URL, which is used to access your app. See the *Application Url* in the *Overview* window of your container app in the Azure portal for the fully qualified domain name (FQDN) of your container app.
The second URL grants access to the log streaming service and the console. If necessary, you may need to add `https://azurecontainerapps.dev/` to the allowlist of your firewall or proxy.
IP addresses are broken down into the following types:
| Internal load balancer IP address | This address only exists in an internal deployment. | | App-assigned IP-based TLS/SSL addresses | These addresses are only possible with an external deployment, and when IP-based TLS/SSL binding is configured. |
-## Restrictions
+## Subnet address range restrictions
Subnet address ranges can't overlap with the following reserved ranges:
If you're using the Azure CLI and the [platformReservedCidr](vnet-custom-interna
There's no forced tunneling in Container Apps routes. ## DNS-- **Custom DNS**: If your VNET uses a custom DNS server instead of the default Azure-provided DNS server, configure your DNS server to forward unresolved DNS queries to `168.63.129.16`. [Azure recursive resolvers](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) uses this IP address to resolve requests. If you don't use the Azure recursive resolvers, the Container Apps environment won't function. -- **VNET-scope ingress**: If you plan to use VNET-scope [ingress](./ingress.md#configuration) in an internal Container Apps environment, configure your domains in one of the following ways:
+- **Custom DNS**: If your VNET uses a custom DNS server instead of the default Azure-provided DNS server, configure your DNS server to forward unresolved DNS queries to `168.63.129.16`. [Azure recursive resolvers](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) uses this IP address to resolve requests. If you don't use the Azure recursive resolvers, the Container Apps environment can't function.
+
+- **VNET-scope ingress**: If you plan to use VNET-scope [ingress](ingress-overview.md) in an internal Container Apps environment, configure your domains in one of the following ways:
1. **Non-custom domains**: If you don't plan to use custom domains, create a private DNS zone that resolves the Container Apps environment's default domain to the static IP address of the Container Apps environment. You can use [Azure Private DNS](../dns/private-dns-overview.md) or your own DNS server. If you use Azure Private DNS, create a Private DNS Zone named as the Container App EnvironmentΓÇÖs default domain (`<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`), with an `A` record. The A record contains the name `*<DNS Suffix>` and the static IP address of the Container Apps environment.
container-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/overview.md
Title: Azure Container Apps overview
-description: Learn about common scenarios and uses for Azure Container Apps
+description: Learn about common scenarios and uses for Azure Container Apps.
With Azure Container Apps, you can:
- [**Autoscale**](scale-app.md) your apps based on any KEDA-supported scale trigger. Most applications can scale to zero<sup>1</sup>. -- [**Enable HTTPS ingress**](ingress.md) without having to manage other Azure infrastructure.
+- [**Enable HTTPS or TCP ingress**](ingress-how-to.md) without having to manage other Azure infrastructure.
- [**Split traffic**](revisions.md) across multiple versions of an application for Blue/Green deployments and A/B testing scenarios.
With Azure Container Apps, you can:
- [**Monitor logs**](log-monitoring.md) using Azure Log Analytics. -- [**Generous quotas**](quotas.md) which are overridable to increase limits on a per-account basis.
+- [**Generous quotas**](quotas.md) which can be overridden to increase limits on a per-account basis.
<sup>1</sup> Applications that [scale on CPU or memory load](scale-app.md) can't scale to zero.
container-apps Revisions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions-manage.md
Title: Manage revisions in Azure Container Apps
-description: Manage revisions and traffic splitting in Azure Container Apps.
+description: Manage revisions in Azure Container Apps.
This article described the commands to manage your container app's revisions. Fo
## Updating your container app
-To update a container app, use the `az containerapp update` command. With this command you can modify environment variables, compute resources, scale parameters, and deploy a different image. If your container app update includes [revision-scope changes](revisions.md#revision-scope-changes), a new revision will be generated.
+To update a container app, use the `az containerapp update` command. With this command you can modify environment variables, compute resources, scale parameters, and deploy a different image. If your container app update includes [revision-scope changes](revisions.md#revision-scope-changes), a new revision is generated.
# [Bash](#tab/bash)
echo $RevisionObject
## Revision copy
-To create a new revision based on an existing revision, use the `az containerapp revision copy`. Container Apps will use the configuration of the existing revision, which you then may modify.
+To create a new revision based on an existing revision, use the `az containerapp revision copy` command. Container Apps uses the configuration of the existing revision, which you may then modify.
With this command, you can modify environment variables, compute resources, scale parameters, and deploy a different image. You may also use a YAML file to define these and other configuration options and parameters. For more information regarding this command, see [`az containerapp revision copy`](/cli/azure/containerapp/revision#az-containerapp-revision-copy).
az containerapp revision copy \
# [PowerShell](#tab/powershell)
-The following example demonstrates how to copy a container app revision using the Azure CLI command. There is not an equivalent PowerShell command.
+The following example demonstrates how to copy a container app revision using the Azure CLI command. There isn't an equivalent PowerShell command.
```azurecli az containerapp revision copy `
Disable-AzContainerAppRevision @CmdArgs
This command restarts a revision. For more information about this command, see [`az containerapp revision restart`](/cli/azure/containerapp/revision#az-containerapp-revision-restart).
-When you modify secrets in your container app, you'll need to restart the active revisions so they can access the secrets.
+When you modify secrets in your container app, you need to restart the active revisions so they can access the secrets.
# [Bash](#tab/bash)
You can add and remove a label from a revision. For more information about the
To add a label to a revision, use the [`az containerapp revision label add`](/cli/azure/containerapp/revision/label#az-containerapp-revision-label-add) command.
-You can only assign a label to one revision at a time, and a revision can only be assigned one label. If the revision you specify has a label, the add command will replace the existing label.
+You can only assign a label to one revision at a time, and a revision can only be assigned one label. If the revision you specify has a label, the add command replaces the existing label.
This example adds a label to a revision: (Replace the \<PLACEHOLDERS\> with your values.)
az containerapp revision label add `
## Traffic splitting
-Applied by assigning percentage values, you can decide how to balance traffic among different revisions. Traffic splitting rules are assigned by setting weights to different revisions.
-
-To create a traffic rule that always routes traffic to the latest revision, set its `latestRevision` property to `true` and don't set `revisionName`.
-
-The following example shows how to split traffic between three revisions.
-
-```json
-{
- ...
- "configuration": {
- "ingress": {
- "traffic": [
- {
- "revisionName": <REVISION1_NAME>,
- "weight": 50
- },
- {
- "revisionName": <REVISION2_NAME>,
- "weight": 30
- },
- {
- "latestRevision": true,
- "weight": 20
- }
- ]
- }
- }
-}
-```
-
-Each revision gets traffic based on the following rules:
--- 50% of the requests go to REVISION1-- 30% of the requests go to REVISION2-- 20% of the requests go to the latest revision-
-The sum of all revision weights must equal 100.
-
-In this example, replace the `<REVISION*_NAME>` placeholders with revision names in your container app. You access revision names via the [revision list](#revision-list) command.
+You can decide how to balance traffic among different revisions by assigning percentage values. Traffic splitting rules are assigned by setting weights to different revisions by their name or [label](#revision-labels). For more information, see [Traffic splitting](traffic-splitting.md).
## Next steps
container-apps Revisions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions.md
Title: Revisions in Azure Container Apps
-description: Learn how revisions are created in Azure Container Apps
+description: Learn how revisions are created in Azure Container Apps.
The following diagram shows a container app with two revisions.
:::image type="content" source="media/revisions/azure-container-apps-revisions-traffic-split.png" alt-text="Azure Container Apps: Traffic splitting among revisions":::
-The scenario shown above presumes the container app is in the following state:
+This scenario presumes the container app is in the following state:
-- [Ingress](ingress.md) is enabled, making the container app available via HTTP.
+- [Ingress](ingress-how-to.md) is enabled, making the container app available via HTTP or TCP.
- The first revision was deployed as _Revision 1_.
- After the container was updated, a new revision was activated as _Revision 2_.
-- [Traffic splitting](revisions-manage.md#traffic-splitting) rules are configured so that _Revision 1_ receives 80% of the requests, and _Revision 2_ receives the remaining 20%.
+- [Traffic splitting](traffic-splitting.md) rules are configured so that _Revision 1_ receives 80% of the requests, and _Revision 2_ receives the remaining 20%.
## Revision name suffix
These parameters include:
- [Secret values](manage-secrets.md) (revisions must be restarted before a container recognizes new secret values)
- [Revision mode](#revision-modes)
- Ingress configuration including:
- - Turning [ingress](ingress.md) on or off
- - [Traffic splitting rules](revisions-manage.md#traffic-splitting)
+ - Turning [ingress](ingress-how-to.md) on or off
+ - [Traffic splitting rules](traffic-splitting.md)
- Labels
- Credentials for private container registries
- Dapr settings
By default, a container app is in *single revision mode*. In this mode, when a n
Set the revision mode to *multiple revision mode*, to run multiple revisions of your app simultaneously. While in this mode, new revisions are activated alongside current active revisions.
-For an app implementing external HTTP ingress, you can control the percentage of traffic going to each active revision from your container app's **Revision management** page in the Azure portal, using Azure CLI commands, or in an ARM template. For more information, see [Traffic splitting](revisions-manage.md#traffic-splitting).
+For an app implementing external HTTP ingress, you can control the percentage of traffic going to each active revision from your container app's **Revision management** page in the Azure portal, using Azure CLI commands, or in an ARM template. For more information, see [Traffic splitting](traffic-splitting.md).
## Revision Labels
You can find the label URL in the revision details pane.
## Activation state
-In *multiple revision mode*, revisions remain active until you deactivate them. You can activate and deactivate revisions from your container app's **Revision management** page in the Azure portal or from the Azure CLI.
+In *multiple revision mode*, revisions remain active until you deactivate them. You can activate and deactivate revisions from your container app's **Revision management** page in the Azure portal or from the Azure CLI.
You aren't charged for the inactive revisions. You can have a maximum of 100 revisions, after which the oldest revision is purged.
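For example, deactivating a revision from the CLI might look like the following sketch; the placeholder names are assumptions you replace with your own values:

```azurecli
az containerapp revision deactivate \
  --name <APP_NAME> \
  --resource-group <RESOURCE_GROUP> \
  --revision <REVISION_NAME>
```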
container-apps Sticky Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/sticky-sessions.md
+
+ Title: Session Affinity in Azure Container Apps
+description: How to set session affinity (sticky sessions) in Azure Container Apps.
++++ Last updated : 03/28/2023+
+zone_pivot_groups: arm-portal
++
+# Session Affinity in Azure Container Apps
+
+Session affinity, also known as sticky sessions, is a feature that allows you to route all requests from a client to the same replica. This feature is useful for stateful applications that require a consistent connection to the same replica.
+
+Session stickiness is enforced using HTTP cookies. This feature is available in single revision mode when HTTP ingress is enabled. A client may be routed to a new replica if the previous replica is no longer available.
+
+If your app doesn't require session affinity, we recommend that you don't enable it. With session affinity disabled, ingress distributes requests more evenly across replicas, improving the performance of your app.
+
+> [!NOTE]
+> Session affinity is only supported when your app is in [single revision mode](revisions.md#single-revision-mode) and the ingress type is HTTP.
+
+## Configure session affinity
++
+Session affinity is configured by setting the `affinity` property in the `ingress.stickySessions` configuration section. The following example shows how to configure session affinity for a container app:
+
+```json
+{
+ ...
+ "configuration": {
+ "ingress": {
+ "external": true,
+ "targetPort": 80,
+ "transport": "auto",
+ "stickySessions": {
+ "affinity": "sticky"
+ }
+ }
+ }
+}
+```
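+
+If you manage ingress from the command line, recent Azure CLI versions also expose this setting. The following sketch assumes the `az containerapp ingress sticky-sessions set` command is available in your CLI version; replace the placeholders with your own values:
+
+```azurecli
+az containerapp ingress sticky-sessions set \
+  --name <APP_NAME> \
+  --resource-group <RESOURCE_GROUP> \
+  --affinity sticky
+```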
++++
+You can enable session affinity when you create your container app via the Azure portal. To enable session affinity:
+
+1. On the **Create Container App** page, select the **App settings** tab.
+1. In the **Application ingress settings** section, select **Enabled** for the **Session affinity** setting.
+++
+You can also enable or disable session affinity after your container app is created. To change the setting:
+
+1. Go to your app in the portal.
+1. Select **Ingress**.
+1. You can enable or disable **Session affinity** by selecting or deselecting **Enabled**.
+1. Select **Save**.
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configure ingress](ingress-how-to.md)
container-apps Traffic Splitting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/traffic-splitting.md
+
+ Title: Traffic splitting in Azure Container Apps
+description: Send a portion of an app's traffic to different revisions in Azure Container Apps.
++++ Last updated : 03/28/2023+
+zone_pivot_groups: arm-azure-cli-portal
++
+# Traffic splitting in Azure Container Apps
+
+By default, when ingress is enabled, all traffic is routed to the latest deployed revision. When you enable [multiple revision mode](revisions.md#revision-modes) in your container app, you can split incoming traffic between active revisions.
+
+Traffic splitting is useful for testing updates to your container app. You can use traffic splitting to gradually phase in a new revision in [blue-green deployments](https://martinfowler.com/bliki/BlueGreenDeployment.html) or in [A/B testing](https://wikipedia.org/wiki/A/B_testing).
+
+Traffic splitting is based on the weight (percentage) of traffic that is routed to each revision. The combined weight of all traffic split rules must equal 100%. You can specify revisions by revision name or [revision label](revisions.md#revision-labels).
+
+This article shows you how to configure traffic splitting rules for your container app.
+To run the following examples, you need a container app with multiple revisions.
++
+## Configure traffic splitting
++
+Configure traffic splitting between revisions using the [`az containerapp ingress traffic set`](/cli/azure/containerapp/revision#az-containerapp-ingress-traffic-set) command. You can specify the revisions by name with the `--revision-weight` parameter or by revision label with the `--label-weight` parameter.
+
+The following command sets the traffic weight for each revision to 50%:
+
+```azurecli
+az containerapp ingress traffic set \
+ --name <APP_NAME> \
+ --resource-group <RESOURCE_GROUP> \
+ --revision-weight <REVISION_1>=50 <REVISION_2>=50
+```
+
+Make sure to replace the placeholder values surrounded by `<>` with your own values.
+
+This command sets the traffic weight for the revision with label `<LABEL_1>` to 80% and the revision with label `<LABEL_2>` to 20%:
+
+```azurecli
+az containerapp ingress traffic set \
+ --name <APP_NAME> \
+ --resource-group <RESOURCE_GROUP> \
+ --label-weight <LABEL_1>=80 <LABEL_2>=20
+
+```
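+
+To confirm the resulting weights, you can list the current traffic configuration. This sketch assumes the `az containerapp ingress traffic show` command is available in your CLI version:
+
+```azurecli
+az containerapp ingress traffic show \
+  --name <APP_NAME> \
+  --resource-group <RESOURCE_GROUP>
+```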
+++
+1. Go to your container app in the [Azure portal](https://portal.azure.com).
+1. Select **Revision management** from the left side menu.
+1. If the revision mode is *Single*, set the mode to *Multiple*.
+ 1. Select **Choose revision mode**.
+ 1. Select **Multiple: Several revisions active simultaneously**.
+ 1. Select **Apply**.
+ 1. Wait for the **Revision Mode** to update to **Multiple**.
+ :::image type="content" source="media/ingress/screenshot-revision-management-mode.png" alt-text="Screenshot of the revision management revision mode setting.":::
+1. Select **Show inactive revisions**.
+1. If you don't have multiple revisions, you can create a new revision.
+ 1. Select **Create new revision**.
+ 1. You can use the default settings or customize the revision.
+ 1. Enter a **Name/Suffix** for the revision.
+ 1. Select **Create**.
+ :::image type="content" source="media/ingress/screenshot-create-deploy-new-revision.png" alt-text="Screenshot of Create and deploy new revision.":::
+ 1. Wait for the revision to deploy.
+1. Select **Active** for the revisions you want to route traffic to.
+1. Enter the percentage of traffic you want to route to each revision in the **Traffic** column. The combined percentage of all traffic must equal 100%.
+1. Select **Save**.
+++
+Enable traffic splitting by adding a `traffic` array to the `configuration.ingress` section of your container app template. You can specify the revisions by name with the `revisionName` property or by revision label with the `label` property.
+
+The following example sets 100% of traffic to the latest deployed revision:
+
+```json
+{
+ ...
+ "configuration": {
+ "ingress": {
+ "external": true,
+ "targetPort": 80,
+ "allowInsecure": false,
+ "traffic": [
+ {
+ "latestRevision": true,
+ "weight": 100
+ }
+ ]
+ },
+ },
+```
+
+The following example shows traffic splitting between two revisions by name:
+
+```json
+{
+ ...
+ "configuration": {
+ "ingress": {
+ "external": true,
+ "targetPort": 80,
+ "allowInsecure": false,
+ "traffic": [
+ {
+ "revisionName": "my-example-app--5g3ty20",
+ "weight": 50
+ },
+ {
+ "revisionName": "my-example-app--qcfkbsv",
+ "weight": 50
+ }
+ ],
+ },
+ },
+```
+
+The following example shows traffic splitting between two revisions by label:
+
+```json
+{
+ ...
+ "configuration": {
+ "ingress": {
+ "external": true,
+ "targetPort": 80,
+ "allowInsecure": false,
+ "traffic": [
+ {
+ "weight": 50,
+ "label": "v-2"
+ },
+ {
+ "weight": 50,
+ "label": "v-1"
+ }
+ ],
+ },
+ },
+```
++
+## Use cases
+
+The following scenarios describe configuration settings for common use cases. The examples are shown in JSON format, but you can also use the Azure portal or Azure CLI to configure traffic splitting.
+
+### Rapid iteration
+
+In situations where you're frequently iterating on your container app during development, you can set traffic rules to always shift all traffic to the latest deployed revision.
+
+The following example template routes all traffic to the latest deployed revision:
+
+```json
+"ingress": {
+ "traffic": [
+ {
+ "latestRevision": true,
+ "weight": 100
+ }
+ ]
+}
+```
+
+Once you're satisfied with the latest revision, you can lock traffic to that revision by updating the `ingress` settings to:
+
+```json
+"ingress": {
+ "traffic": [
+ {
+ "latestRevision": false, // optional
+ "revisionName": "myapp--knowngoodrevision",
+ "weight": 100
+ }
+ ]
+}
+```
+
+### Update existing revision
+
+Consider a situation where you have a known good revision that's serving 100% of your traffic, but you want to issue an update to your app. You can deploy and test new revisions using their direct endpoints without affecting the main revision serving the app.
+
+Once you're satisfied with the updated revision, you can shift a portion of traffic to the new revision for testing and verification.
+
+The following template moves 20% of traffic over to the updated revision:
+
+```json
+"ingress": {
+ "traffic": [
+ {
+ "revisionName": "myapp--knowngoodrevision",
+ "weight": 80
+ },
+ {
+ "revisionName": "myapp--newerrevision",
+ "weight": 20
+ }
+ ]
+}
+```
+
+### Staging microservices
+
+When building microservices, you may want to maintain production and staging endpoints for the same app. Use labels to ensure that traffic doesn't switch between different revisions.
+
+The following example template routes all production traffic to a known good revision and applies the *staging* label to a newer revision that receives no production traffic.
+
+```json
+"ingress": {
+ "traffic": [
+ {
+ "revisionName": "myapp--knowngoodrevision",
+ "weight": 100
+ },
+ {
+ "revisionName": "myapp--98fdgt",
+ "weight": 0,
+ "label": "staging"
+ }
+ ]
+}
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configure ingress](ingress-how-to.md)
container-registry Container Registry Auto Purge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auto-purge.md
The `acr purge` container command deletes images by tag in a repository that mat
`acr purge` is designed to run as a container command in an [ACR Task](container-registry-tasks-overview.md), so that it authenticates automatically with the registry where the task runs and performs actions there. The task examples in this article use the `acr purge` command [alias](container-registry-tasks-reference-yaml.md#aliases) in place of a fully qualified container image command.
+> [!IMPORTANT]
+- The standard command to execute `acr purge` is `az acr run --registry <YOUR_REGISTRY> --cmd 'acr purge --optional parameter'`.
+- We recommend running the complete `acr purge` command. For example, to run `acr purge --help`, use `az acr run --registry <YOUR_REGISTRY> --cmd 'acr purge --help'`.
At a minimum, specify the following when you run `acr purge`:

* `--filter` - A repository name *regular expression* and a tag name *regular expression* to filter images in the registry. Examples: `--filter "hello-world:.*"` matches all tags in the `hello-world` repository, `--filter "hello-world:^1.*"` matches tags beginning with `1` in the `hello-world` repository, and `--filter ".*/cache:.*"` matches all tags in the repositories ending in `/cache`. You can also pass multiple `--filter` parameters.
-* `--ago` - A Go-style [duration string](https://go.dev/pkg/time/) to indicate a duration beyond which images are deleted. The duration consists of a sequence of one or more decimal numbers, each with a unit suffix. Valid time units include "d" for days, "h" for hours, and "m" for minutes. For example, `--ago 2d3h6m` selects all filtered images last modified more than 2 days, 3 hours, and 6 minutes ago, and `--ago 1.5h` selects images last modified more than 1.5 hours ago.
+* `--ago` - A Go-style [duration string](https://go.dev/pkg/time/) to indicate a duration beyond which images are deleted. The duration consists of a sequence of one or more decimal numbers, each with a unit suffix. Valid time units include "d" for days, "h" for hours, and "m" for minutes. For example, `--ago 2d3h6m` selects all filtered images last modified more than 2 days, 3 hours, and 6 minutes ago, and `--ago 1.5h` selects images last modified more than 1.5 hours ago.
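For illustration, a complete run that combines these required parameters might look like the following sketch; the registry name, filter, and time window are placeholders:

```azurecli
az acr run \
  --registry <YOUR_REGISTRY> \
  --cmd "acr purge --filter 'hello-world:.*' --ago 30d" \
  /dev/null
```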
`acr purge` supports several optional parameters. The following two are used in examples in this article:
container-registry Container Registry Transfer Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-images.md
Please complete the prerequisites outlined [here](./container-registry-transfer-
- You have an existing Key Vault with a secret containing a valid SAS token with the necessary permissions in both clouds.
- You have a recent version of the Az CLI installed in both clouds.
+> [!IMPORTANT]
+- ACR Transfer supports artifacts with a layer size limit of 8 GB due to technical limitations.
+ ## Consider using the Az CLI extension
-For most non-automated use-cases, we recommend using the Az CLI Extension if possible. You can view documentation for the Az CLI Extension [here](./container-registry-transfer-cli.md).
+For most nonautomated use-cases, we recommend using the Az CLI Extension if possible. You can view documentation for the Az CLI Extension [here](./container-registry-transfer-cli.md).
## Create ExportPipeline with Resource Manager
az storage blob list \
Use the AzCopy tool or other methods to [transfer blob data](../storage/common/storage-use-azcopy-v10.md#transfer-data) from the source storage account to the target storage account.
-For example, the following [`azcopy copy`](../storage/common/storage-ref-azcopy-copy.md) command copies myblob from the *transfer* container in the source account to the *transfer* container in the target account. If the blob exists in the target account, it's overwritten. Authentication uses SAS tokens with appropriate permissions for the source and target containers. (Steps to create tokens are not shown.)
+For example, the following [`azcopy copy`](../storage/common/storage-ref-azcopy-copy.md) command copies myblob from the *transfer* container in the source account to the *transfer* container in the target account. If the blob exists in the target account, it's overwritten. Authentication uses SAS tokens with appropriate permissions for the source and target containers. (Steps to create tokens aren't shown.)
```console
azcopy copy \
az acr repository list --name <target-registry-name>
## Redeploy PipelineRun resource
-If redeploying a PipelineRun resource with *identical properties*, you must leverage the **forceUpdateTag** property. This property indicates that the PipelineRun resource should be recreated even if the configuration has not changed. Please ensure forceUpdateTag is different each time you redeploy the PipelineRun resource. The example below recreates a PipelineRun for export. The current datetime is used to set forceUpdateTag, thereby ensuring this property is always unique.
+If redeploying a PipelineRun resource with *identical properties*, you must use the **forceUpdateTag** property. This property indicates that the PipelineRun resource should be recreated even if the configuration hasn't changed. Ensure forceUpdateTag is different each time you redeploy the PipelineRun resource. The following example recreates a PipelineRun for export. The current datetime is used to set forceUpdateTag, thereby ensuring this property is always unique.
```console
CURRENT_DATETIME=`date +"%Y-%m-%d:%T"`
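# Illustrative continuation: redeploy the PipelineRun template and pass the timestamp as forceUpdateTag.
# The template file name and parameter name are assumptions; match them to your own PipelineRun template.
az deployment group create \
  --resource-group <RESOURCE_GROUP> \
  --template-file exportPipelineRun.json \
  --parameters forceUpdateTag=$CURRENT_DATETIME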
container-registry Tutorial Troubleshoot Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-troubleshoot-customer-managed-keys.md
If you try to remove a user-assigned or system-assigned managed identity that yo
Azure resource '/subscriptions/xxxx/resourcegroups/myGroup/providers/Microsoft.ContainerRegistry/registries/myRegistry' does not have access to identity 'xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx' Try forcibly adding the identity to the registry <registry name>. For more information on bring your own key, please visit 'https://aka.ms/acr/cmk'. ```
-You also won't be able to change (rotate) the encryption key. The resolution steps depend on the type of identity that you used for encryption.
+You're unable to change (rotate) the encryption key. The resolution steps depend on the type of identity that you used for encryption.
### Removing a user-assigned identity
If you enable a key vault firewall or virtual network after creating an encrypte
If the problem persists, contact Azure Support.
+## Identity expiry error
+
+The identity attached to a registry is set for autorenewal to avoid expiry. If you disassociate an identity from a registry, an error message explains that you can't remove an identity that's in use for CMK. Attempting to remove the identity jeopardizes its autorenewal. The artifact pull/push operations work until the identity expires (usually three months). After the identity expires, you see an HTTP 403 error with the message "The identity associated with the registry is inactive. This could be due to attempted removal of the identity. Reassign the identity manually".
+
+You have to explicitly reassign the identity to the registry.
+
+1. Run the [az acr identity assign](/cli/azure/acr/identity/#az-acr-identity-assign) command to reassign the identity manually.
+
+ - For example,
+
+ ```azurecli-interactive
+ az acr identity assign -n myRegistry \
+ --identities "/subscriptions/mysubscription/resourcegroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myidentity"
+ ```
+ ## Accidental deletion of a key vault or key
-Deletion of the key vault, or the key, that's used to encrypt a registry with a customer-managed key will make the registry's content inaccessible. If [soft delete](../key-vault/general/soft-delete-overview.md) is enabled in the key vault (the default option), you can recover a deleted vault or key vault object and resume registry operations.
+Deletion of the key vault, or the key, that's used to encrypt a registry with a customer-managed key makes the registry's content inaccessible. If [soft delete](../key-vault/general/soft-delete-overview.md) is enabled in the key vault (the default option), you can recover a deleted vault or key vault object and resume registry operations.
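For example, if soft delete is enabled, sketches like the following recover the vault or an individual key; the names are placeholders:

```azurecli
# Recover a soft-deleted key vault
az keyvault recover --name <KEY_VAULT_NAME>

# Recover a soft-deleted key in an existing vault
az keyvault key recover --vault-name <KEY_VAULT_NAME> --name <KEY_NAME>
```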
## Next steps
cosmos-db Bulk Executor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/bulk-executor-overview.md
description: Perform bulk operations in Azure Cosmos DB through bulk import and
- Previously updated : 05/28/2019
+ Last updated : 3/30/2023

# Azure Cosmos DB bulk executor library overview

[!INCLUDE[NoSQL](includes/appliesto-nosql.md)]
-
-Azure Cosmos DB is a fast, flexible, and globally distributed database service that is designed to elastically scale out to support:
-* Large read and write throughput (millions of operations per second).
-* Storing high volumes of (hundreds of terabytes, or even more) transactional and operational data with predictable millisecond latency.
+Azure Cosmos DB is a fast, flexible, and globally distributed database service that elastically scales out to support:
-The bulk executor library helps you leverage this massive throughput and storage. The bulk executor library allows you to perform bulk operations in Azure Cosmos DB through bulk import and bulk update APIs. You can read more about the features of bulk executor library in the following sections.
+* Large read and write throughput, on the order of millions of operations per second.
+* Storing high volumes of transactional and operational data, on the order of hundreds of terabytes or even more, with predictable millisecond latency.
-> [!NOTE]
-> Currently, bulk executor library supports import and update operations and this library is supported by Azure Cosmos DB API for NoSQL and Gremlin accounts only.
+The bulk executor library helps you use this massive throughput and storage. The bulk executor library allows you to perform bulk operations in Azure Cosmos DB through bulk import and bulk update APIs. You can read more about the features of bulk executor library in the following sections.
+
+> [!NOTE]
+> Currently, the bulk executor library supports import and update operations. This library is supported by Azure Cosmos DB API for NoSQL and Gremlin accounts only.
> [!IMPORTANT]
-> The bulk executor library is not currently supported on [serverless](serverless.md) accounts. On .NET, it is recommended to use the [bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/) available in the V3 version of the SDK.
-
+> The bulk executor library is not currently supported on [serverless](serverless.md) accounts. On .NET, we recommend that you use the [bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/) available in the V3 version of the SDK.
+ ## Key features of the bulk executor library
-
-* It significantly reduces the client-side compute resources needed to saturate the throughput allocated to a container. A single threaded application that writes data using the bulk import API achieves 10 times greater write throughput when compared to a multi-threaded application that writes data in parallel while saturating the client machine's CPU.
-* It abstracts away the tedious tasks of writing application logic to handle rate limiting of request, request timeouts, and other transient exceptions by efficiently handling them within the library.
+* Using the bulk executor library significantly reduces the client-side compute resources needed to saturate the throughput allocated to a container. A single-threaded application that writes data using the bulk import API achieves 10 times greater write throughput compared to a multi-threaded application that writes data in parallel while saturating the client machine's CPU.
+
+* The bulk executor library abstracts away the tedious tasks of writing application logic to handle rate limiting of requests, request timeouts, and other transient exceptions. It efficiently handles them within the library.
+
+* It provides a simplified mechanism for applications that perform bulk operations to scale out. A single bulk executor instance that runs on an Azure virtual machine can consume more than 500K RU/s. You can achieve a higher throughput rate by adding more instances on individual client virtual machines.
+
+* The bulk executor library can bulk import more than a terabyte of data within an hour by using a scale-out architecture.
+
+* It can bulk update existing data in Azure Cosmos DB containers as patches.
+
+## How does the bulk executor operate?
+
+When a bulk operation to import or update documents is triggered with a batch of entities, they're initially shuffled into buckets that correspond to their Azure Cosmos DB partition key range. Within each bucket that corresponds to a partition key range, they're broken down into mini-batches.
+
+Each mini-batch acts as a payload that is committed on the server-side. The bulk executor library has built in optimizations for concurrent execution of the mini-batches both within and across partition key ranges.
-* It provides a simplified mechanism for applications performing bulk operations to scale out. A single bulk executor instance running on an Azure VM can consume greater than 500K RU/s and you can achieve a higher throughput rate by adding additional instances on individual client VMs.
-
-* It can bulk import more than a terabyte of data within an hour by using a scale-out architecture.
+The following diagram illustrates how bulk executor batches data into different partition keys:
-* It can bulk update existing data in Azure Cosmos DB containers as patches.
-
-## How does the bulk executor operate?
-When a bulk operation to import or update documents is triggered with a batch of entities, they are initially shuffled into buckets corresponding to their Azure Cosmos DB partition key range. Within each bucket that corresponds to a partition key range, they are broken down into mini-batches and each mini-batch act as a payload that is committed on the server-side. The bulk executor library has built in optimizations for concurrent execution of these mini-batches both within and across partition key ranges. Following image illustrates how bulk executor batches data into different partition keys:
+The bulk executor library makes sure to maximally utilize the throughput allocated to a collection. It uses an [AIMD-style congestion control mechanism](https://tools.ietf.org/html/rfc5681) for each Azure Cosmos DB partition key range to efficiently handle rate limiting and timeouts.
+For more information about sample applications that consume the bulk executor library, see [Use the bulk executor .NET library to perform bulk operations in Azure Cosmos DB](nosql/bulk-executor-dotnet.md) and [Perform bulk operations on Azure Cosmos DB data](bulk-executor-java.md).
-The bulk executor library makes sure to maximally utilize the throughput allocated to a collection. It uses an [AIMD-style congestion control mechanism](https://tools.ietf.org/html/rfc5681) for each Azure Cosmos DB partition key range to efficiently handle rate limiting and timeouts.
+For reference information, see [.NET bulk executor library](nosql/sdk-dotnet-bulk-executor-v2.md) and [Java bulk executor library](nosql/sdk-java-bulk-executor-v2.md).
-## Next Steps
+## Next steps
-* Learn more by trying out the sample applications consuming the bulk executor library in [.NET](nosql/bulk-executor-dotnet.md) and [Java](bulk-executor-java.md).
-* Check out the bulk executor SDK information and release notes in [.NET](nosql/sdk-dotnet-bulk-executor-v2.md) and [Java](nosql/sdk-java-bulk-executor-v2.md).
-* The bulk executor library is integrated into the Azure Cosmos DB Spark connector, to learn more, see [Azure Cosmos DB Spark connector](./nosql/quickstart-spark.md) article.
-* The bulk executor library is also integrated into a new version of [Azure Cosmos DB connector](../data-factory/connector-azure-cosmos-db.md) for Azure Data Factory to copy data.
+* [Azure Cosmos DB Spark connector](./nosql/quickstart-spark.md)
+* [Azure Cosmos DB connector](../data-factory/connector-azure-cosmos-db.md)
cosmos-db Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/data-explorer.md
Title: Use Azure Cosmos DB Explorer to manage your data
-description: Azure Cosmos DB Explorer is a standalone web-based interface that allows you to view and manage the data stored in Azure Cosmos DB.
+description: Learn about Azure Cosmos DB Explorer, a standalone web-based interface that allows you to view and manage the data stored in Azure Cosmos DB.
Previously updated : 09/23/2020 Last updated : 03/02/2023
-# Work with data using Azure Cosmos DB Explorer
+# Work with data using Azure Cosmos DB Explorer
[!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]

Azure Cosmos DB Explorer is a standalone web-based interface that allows you to view and manage the data stored in Azure Cosmos DB. Azure Cosmos DB Explorer is equivalent to the existing **Data Explorer** tab that is available in Azure portal when you create an Azure Cosmos DB account.

The key advantages of Azure Cosmos DB Explorer over the existing Data Explorer are:
-* You have a full screen real-estate to view your data, run queries, stored procedures, triggers, and view their results.
-
-* You can provide temporary or permanent read or read-write access to your database account and its collections to other users who do not have access to Azure portal or subscription.
-
-* You can share the query results with other users who do not have access to Azure portal or subscription.
+- You have a full screen real-estate to view your data, run queries, stored procedures, triggers, and view their results.
+- You can provide read or read-write access to your database account and its collections to other users who don't have access to Azure portal or subscription.
+- You can share the query results with other users who don't have access to Azure portal or subscription.
## Access Azure Cosmos DB Explorer
-1. Sign in to [Azure portal](https://portal.azure.com/).
+1. Sign in to [Azure portal](https://portal.azure.com/).
-2. From **All resources**, find and navigate to your Azure Cosmos DB account, select Keys, and copy the **Primary Connection String**.
+1. From **All resources**, find and navigate to your Azure Cosmos DB account, select **Keys**, and copy the **Primary Connection String**. You can select either:
-3. Go to https://cosmos.azure.com/, paste the connection string and select **Connect**. By using the connection string, you can access the Azure Cosmos DB Explorer without any time limits.
+ - **Read-write Keys**. When you share the read-write primary connection string with other users, they can view and modify the databases, collections, queries, and other resources associated with that specific account.
+ - **Read-only Keys**. When you share the read-only primary connection string with other users, they can view the databases, collections, queries, and other resources associated with that specific account. For example, if you want to share results of a query with your teammates who don't have access to Azure portal or your Azure Cosmos DB account, you can provide them with this value.
- If you want to provide other users temporary access to your Azure Cosmos DB account, you can do so by using the read-write and read access URLs.
+1. Go to [https://cosmos.azure.com/](https://cosmos.azure.com/).
-4. Open the **Data Explorer** blade, select **Open Full Screen**. From the pop-up dialog, you can view two access URLs – **Read-Write** and **Read**. These URLs allow you to share your Azure Cosmos DB account temporarily with other users. Access to the account expires in 24 hours after which you can reconnect by using a new access URL or the connection string.
+1. Select **Connect to your account with connection string**, paste the connection string, and select **Connect**.
 - **Read-Write** – When you share the Read-Write URL with other users, they can view and modify the databases, collections, queries, and other resources associated with that specific account.
+To open Azure Cosmos DB Explorer from the Azure portal:
- **Read** - When you share the read-only URL with other users, they can view the databases, collections, queries, and other resources associated with that specific account. For example, if you want to share results of a query with your teammates who don't have access to Azure portal or your Azure Cosmos DB account, you can provide them with this URL.
+1. Select the **Data Explorer** in the left menu, then select **Open Full Screen**.
- Choose the type of access you'd like to open the account with and click **Open**. After you open the explorer, the experience is same as you had with the Data Explorer tab in Azure portal.
+ :::image type="content" source="./media/data-explorer/open-data-explorer.png" alt-text="Screenshot shows Data Explorer page with Open Full Screen highlighted." lightbox="./media/data-explorer/open-data-explorer.png":::
- :::image type="content" source="./media/data-explorer/open-data-explorer-with-access-url.png" alt-text="Open Azure Cosmos DB Explorer":::
+1. In the **Open Full Screen** dialog, select **Open**.
## Known issues
-Currently the **Open Full Screen** experience that allows you to share temporary read-write or read access is not yet supported for Azure Cosmos DB API for Gremlin and Table accounts. You can still view your Gremlin and API for Table accounts by passing the connection string to Azure Cosmos DB Explorer.
-
-Currently, viewing documents that contain a UUID is not supported in Data Explorer. This does not affect loading collections, only viewing individual documents or queries that include these documents. To view and manage these documents, users should continue to use the tool that was originally used to create these documents.
+Currently, viewing documents that contain a UUID isn't supported in Data Explorer. This limitation doesn't affect loading collections, only viewing individual documents or queries that include these documents. To view and manage these documents, users should continue to use the tool that was originally used to create these documents.
-Customers receiving HTTP-401 errors may be due to insufficient Azure RBAC permissions for the customer's Azure account, particularly if the account has a custom role. Any custom roles must have `Microsoft.DocumentDB/databaseAccounts/listKeys/*` action to use Data Explorer if signing in using their Azure Active Directory credentials.
+HTTP 401 errors may be caused by insufficient Azure RBAC permissions for your Azure account, particularly if the account has a custom role. Any custom roles must have the `Microsoft.DocumentDB/databaseAccounts/listKeys/*` action to use Data Explorer when signing in with Azure Active Directory credentials.
## Next steps
-Now that you have learned how to get started with Azure Cosmos DB Explorer to manage your data, next you can:
+Now that you've learned how to get started with Azure Cosmos DB Explorer to manage your data, next you can:
-* Start defining [queries](nosql/query/getting-started.md) using SQL syntax and perform [server side programming](stored-procedures-triggers-udfs.md) by using stored procedures, UDFs, triggers.
+- [Getting started with queries](nosql/query/getting-started.md)
+- [Stored procedures, triggers, and user-defined functions](stored-procedures-triggers-udfs.md)
cosmos-db Modeling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/modeling.md
Title: 'Graph data modeling for Azure Cosmos DB for Gremlin'
-description: Learn how to model a graph database by using Azure Cosmos DB for Gremlin. This article describes when to use a graph database and best practices to model entities and relationships.
+ Title: Graph data modeling with Azure Cosmos DB for Apache Gremlin
+description: Learn how to model a graph database by using Azure Cosmos DB for Apache Gremlin, and learn best practices to model entities and relationships.
Previously updated : 12/02/2019 Last updated : 03/14/2023
-# Graph data modeling for Azure Cosmos DB for Gremlin
+# Graph data modeling with Azure Cosmos DB for Apache Gremlin
[!INCLUDE[Gremlin](../includes/appliesto-gremlin.md)]
-The following document is designed to provide graph data modeling recommendations. This step is vital in order to ensure the scalability and performance of a graph database system as the data evolves. An efficient data model is especially important with large-scale graphs.
+This article provides recommendations for the use of graph data models. These best practices are vital for ensuring the scalability and performance of a graph database system as the data evolves. An efficient data model is especially important for large-scale graphs.
## Requirements

The process outlined in this guide is based on the following assumptions:
- * The **entities** in the problem-space are identified. These entities are meant to be consumed _atomically_ for each request. In other words, the database system isn't designed to retrieve a single entity's data in multiple query requests.
- * There is an understanding of **read and write requirements** for the database system. These requirements will guide the optimizations needed for the graph data model.
- * The principles of the [Apache Tinkerpop property graph standard](https://tinkerpop.apache.org/docs/current/reference/#graph-computing) are well understood.
+
+* The *entities* in the problem-space are identified. These entities are meant to be consumed *atomically* for each request. In other words, the database system isn't designed to retrieve a single entity's data in multiple query requests.
+* There's an understanding of *read and write requirements* for the database system. These requirements guide the optimizations needed for the graph data model.
+* The principles of the [Apache Tinkerpop property graph standard](https://tinkerpop.apache.org/docs/current/reference/#graph-computing) are well understood.
## When do I need a graph database?
-A graph database solution can be optimally applied if the entities and relationships in a data domain have any of the following characteristics:
+A graph database solution can be optimally used if the entities and relationships in a data domain have any of the following characteristics:
-* The entities are **highly connected** through descriptive relationships. The benefit in this scenario is the fact that the relationships are persisted in storage.
-* There are **cyclic relationships** or **self-referenced entities**. This pattern is often a challenge when using relational or document databases.
-* There are **dynamically evolving relationships** between entities. This pattern is especially applicable to hierarchical or tree-structured data with many levels.
-* There are **many-to-many relationships** between entities.
-* There are **write and read requirements on both entities and relationships**.
+* The entities are *highly connected* through descriptive relationships. The benefit in this scenario is that the relationships persist in storage.
+* There are *cyclic relationships* or *self-referenced entities*. This pattern is often a challenge when you use relational or document databases.
+* There are *dynamically evolving relationships* between entities. This pattern is especially applicable to hierarchical or tree-structured data with many levels.
+* There are *many-to-many relationships* between entities.
+* There are *write and read requirements on both entities and relationships*.
-If the above criteria is satisfied, it's likely that a graph database approach will provide advantages for **query complexity**, **data model scalability**, and **query performance**.
+If the above criteria are satisfied, a graph database approach likely provides advantages for *query complexity*, *data model scalability*, and *query performance*.
-The next step is to determine if the graph is going to be used for analytic or transactional purposes. If the graph is intended to be used for heavy computation and data processing workloads, it would be worth to explore the [Cosmos DB Spark connector](../nosql/quickstart-spark.md) and the use of the [GraphX library](https://spark.apache.org/graphx/).
+The next step is to determine if the graph is going to be used for analytic or transactional purposes. If the graph is intended to be used for heavy computation and data processing workloads, it's worth exploring the [Cosmos DB Spark connector](../nosql/quickstart-spark.md) and the [GraphX library](https://spark.apache.org/graphx/).
## How to use graph objects
-The [Apache Tinkerpop property graph standard](https://tinkerpop.apache.org/docs/current/reference/#graph-computing) defines two types of objects **Vertices** and **Edges**.
+The [Apache Tinkerpop property graph standard](https://tinkerpop.apache.org/docs/current/reference/#graph-computing) defines two types of objects: *vertices* and *edges*.
-The following are the best practices for the properties in the graph objects:
+The following are best practices for the properties in the graph objects:
| Object | Property | Type | Notes |
| --- | --- | --- | --- |
-| Vertex | ID | String | Uniquely enforced per partition. If a value isn't supplied upon insertion, an auto-generated GUID will be stored. |
-| Vertex | label | String | This property is used to define the type of entity that the vertex represents. If a value isn't supplied, a default value "vertex" will be used. |
-| Vertex | properties | String, Boolean, Numeric | A list of separate properties stored as key-value pairs in each vertex. |
-| Vertex | partition key | String, Boolean, Numeric | This property defines where the vertex and its outgoing edges will be stored. Read more about [graph partitioning](partitioning.md). |
-| Edge | ID | String | Uniquely enforced per partition. Auto-generated by default. Edges usually don't have the need to be uniquely retrieved by an ID. |
-| Edge | label | String | This property is used to define the type of relationship that two vertices have. |
-| Edge | properties | String, Boolean, Numeric | A list of separate properties stored as key-value pairs in each edge. |
+| Vertex | ID | String | Uniquely enforced per partition. If a value isn't supplied upon insertion, an auto-generated GUID is stored. |
+| Vertex | Label | String | This property is used to define the type of entity that the vertex represents. If a value isn't supplied, a default value *vertex* is used. |
+| Vertex | Properties | String, boolean, numeric | A list of separate properties stored as key-value pairs in each vertex. |
+| Vertex | Partition key | String, boolean, numeric | This property defines where the vertex and its outgoing edges are stored. Read more about [graph partitioning](partitioning.md). |
+| Edge | ID | String | Uniquely enforced per partition. Auto-generated by default. Edges usually don't need to be uniquely retrieved by an ID. |
+| Edge | Label | String | This property is used to define the type of relationship that two vertices have. |
+| Edge | Properties | String, boolean, numeric | A list of separate properties stored as key-value pairs in each edge. |
> [!NOTE]
-> Edges don't require a partition key value, since its value is automatically assigned based on their source vertex. Learn more in the [graph partitioning](partitioning.md) article.
+> Edges don't require a partition key value, since the value is automatically assigned based on their source vertex. Learn more in the [Using a partitioned graph in Azure Cosmos DB](partitioning.md).
## Entity and relationship modeling guidelines
-The following are a set of guidelines to approach data modeling for an Azure Cosmos DB for Gremlin graph database. These guidelines assume that there's an existing definition of a data domain and queries for it.
+The following guidelines help you approach data modeling for an [Azure Cosmos DB for Apache Gremlin](introduction.md) graph database. These guidelines assume that there's an existing definition of a data domain and queries for it.
> [!NOTE]
-> The steps outlined below are presented as recommendations. The final model should be evaluated and tested before its consideration as production-ready. Additionally, the recommendations below are specific to Azure Cosmos DB's Gremlin API implementation.
+> The following steps are presented as recommendations. You should evaluate and test the final model before considering it as production-ready. Additionally, the recommendations are specific to Azure Cosmos DB's Gremlin API implementation.
### Modeling vertices and properties
-The first step for a graph data model is to map every identified entity to a **vertex object**. A one to one mapping of all entities to vertices should be an initial step and subject to change.
+The first step for a graph data model is to map every identified entity to a *vertex object*. A one-to-one mapping of all entities to vertices should be an initial step and subject to change.
-One common pitfall is to map properties of a single entity as separate vertices. Consider the example below, where the same entity is represented in two different ways:
+One common pitfall is to map properties of a single entity as separate vertices. Consider the following example, where the same entity is represented in two different ways:
* **Vertex-based properties**: In this approach, the entity uses three separate vertices and two edges to describe its properties. While this approach might reduce redundancy, it increases model complexity. An increase in model complexity can result in added latency, query complexity, and computation cost. This model can also present challenges in partitioning.
+ :::image type="content" source="./media/modeling/graph-modeling-1.png" alt-text="Diagram of entity model with vertices for properties.":::
-* **Property-embedded vertices**: This approach takes advantage of the key-value pair list to represent all the properties of the entity inside a vertex. This approach provides reduced model complexity, which will lead to simpler queries and more cost-efficient traversals.
+* **Property-embedded vertices**: This approach takes advantage of the key-value pair list to represent all the properties of the entity inside a vertex. This approach reduces model complexity, which leads to simpler queries and more cost-efficient traversals.
+ :::image type="content" source="./media/modeling/graph-modeling-2.png" alt-text="Diagram of the Luis vertex from the previous diagram with ID, label, and properties.":::
> [!NOTE]
-> The above examples show a simplified graph model to only show the comparison between the two ways of dividing entity properties.
+> The preceding diagrams show a simplified graph model that only compares the two ways of dividing entity properties.
-The **property-embedded vertices** pattern generally provides a more performant and scalable approach. The default approach to a new graph data model should gravitate towards this pattern.
+The property-embedded vertices pattern generally provides a more performant and scalable approach. The default approach to a new graph data model should gravitate toward this pattern.
-However, there are scenarios where referencing to a property might provide advantages. For example: if the referenced property is updated frequently. Using a separate vertex to represent a property that is constantly changed would minimize the amount of write operations that the update would require.
+However, there are scenarios where referencing a property might provide advantages. For example, if the referenced property is updated frequently, using a separate vertex to represent it minimizes the number of write operations that each update requires.
-### Relationship modeling with edge directions
+### Relationship models with edge directions
-After the vertices are modeled, the edges can be added to denote the relationships between them. The first aspect that needs to be evaluated is the **direction of the relationship**.
+After the vertices are modeled, the edges can be added to denote the relationships between them. The first aspect that needs to be evaluated is the *direction of the relationship*.
-Edge objects have a default direction that is followed by a traversal when using the `out()` or `outE()` function. Using this natural direction results in an efficient operation, since all vertices are stored with their outgoing edges.
+Edge objects have a default direction that's followed by a traversal when using the `out()` or `outE()` functions. Using this natural direction results in an efficient operation, since all vertices are stored with their outgoing edges.
-However, traversing in the opposite direction of an edge, using the `in()` function, will always result in a cross-partition query. Learn more about [graph partitioning](partitioning.md). If there's a need to constantly traverse using the `in()` function, it's recommended to add edges in both directions.
+However, traversing in the opposite direction of an edge, using the `in()` function, always results in a cross-partition query. Learn more about [graph partitioning](partitioning.md). If there's a need to constantly traverse using the `in()` function, it's recommended to add edges in both directions.
-You can determine the edge direction by using the `.to()` or `.from()` predicates to the `.addE()` Gremlin step. Or by using the [bulk executor library for Gremlin API](bulk-executor-dotnet.md).
+You can determine the edge direction by using the `.to()` or `.from()` predicates with the `.addE()` Gremlin step. Or by using the [bulk executor library for Gremlin API](bulk-executor-dotnet.md).
> [!NOTE]
> Edge objects have a direction by default.
-### Relationship labeling
+### Relationship labels
-Using descriptive relationship labels can improve the efficiency of edge resolution operations. This pattern can be applied in the following ways:
+Using descriptive relationship labels can improve the efficiency of edge resolution operations. You can apply this pattern in the following ways:
* Use non-generic terms to label a relationship. * Associate the label of the source vertex to the label of the target vertex with the relationship name.
-The more specific the label that the traverser will use to filter the edges, the better. This decision can have a significant impact on query cost as well. You can evaluate the query cost at any time [using the executionProfile step](execution-profile.md).
+The more specific the label that the traverser uses to filter the edges, the better. This decision can have a significant effect on query cost as well. You can evaluate the query cost at any time by using the [executionProfile step](execution-profile.md).
+## Next steps
-## Next steps:
-* Check out the list of supported [Gremlin steps](support.md).
+* Check out the list of [supported Gremlin steps](support.md).
* Learn about [graph database partitioning](partitioning.md) to deal with large-scale graphs.
-* Evaluate your Gremlin queries using the [Execution Profile step](execution-profile.md).
-* Third-party Graph [design data model](modeling-tools.md)
+* Evaluate your Gremlin queries using the [execution profile step](execution-profile.md).
+* Third-party graph [design data model](modeling-tools.md).
cosmos-db Local Emulator Export Ssl Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/local-emulator-export-ssl-certificates.md
Title: Export the Azure Cosmos DB Emulator certificates
-description: Learn how to export the Azure Cosmos DB Emulator certificate for use with Java, Python, and Node.js apps. The certificates should be exported and used for languages and runtime environments that don't use the Windows Certificate Store.
+description: Learn how to export the Azure Cosmos DB Emulator certificate for use with languages and environments that don't integrate with the Windows Certificate Store.
Previously updated : 09/17/2020 Last updated : 03/16/2023
The Azure Cosmos DB Emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. Azure Cosmos DB Emulator supports only secure communication through TLS connections.
-Certificates in the Azure Cosmos DB local emulator are generated the first time you run the emulator. There are two certificates. One of them is used to connect to the local emulator and the other is used to manage default encryption of the emulator data within the emulator. The certificate you want to export is the connection certificate with the friendly name "DocumentDBEmulatorCertificate".
+The first time you run the emulator, it generates two certificates. One of them is used to connect to the local emulator and the other is used to manage default encryption of the emulator data within the emulator. The certificate you want to export is the connection certificate with the friendly name `DocumentDBEmulatorCertificate`.
-When you use the emulator to develop apps in different languages such as Java, Python, or Node.js, you need to export the emulator certificate and import it into the required certificate store.
+When you use the emulator to develop apps in different languages, such as Java, Python, or Node.js, you need to export the emulator certificate and import it into the required certificate store.
-The .NET language and runtime uses the Windows Certificate Store to securely connect to the Azure Cosmos DB local emulator when the application is run on a Windows OS host. Other languages have their own method of managing and using certificates. For example, Java uses its own [certificate store](https://docs.oracle.com/cd/E19830-01/819-4712/ablqw/https://docsupdatetracker.net/index.html), Python uses [socket wrappers](https://docs.python.org/2/library/ssl.html), and Node.js uses [tlsSocket](https://nodejs.org/api/tls.html#tls_tls_connect_options_callback).
+The .NET language and runtime uses the Windows Certificate Store to securely connect to the Azure Cosmos DB local emulator when the application is run on a Windows OS host. Other languages have their own method of managing and using certificates. Java uses its own [certificate store](https://docs.oracle.com/cd/E19830-01/819-4712/ablqw/https://docsupdatetracker.net/index.html), Python uses [socket wrappers](https://docs.python.org/2/library/ssl.html), and Node.js uses [tlsSocket](https://nodejs.org/api/tls.html#tls_tls_connect_options_callback).
-This article demonstrates how to export the TLS/SSL certificates for use in different languages and runtime environments that do not integrate with the Windows Certificate Store. You can read more about the emulator in [Use the Azure Cosmos DB Emulator for development and testing](./local-emulator.md).
+This article demonstrates how to export the TLS/SSL certificates for use in different languages and runtime environments that don't integrate with the Windows Certificate Store. For more information about the emulator, see [Install and use the Azure Cosmos DB Emulator](./local-emulator.md).
## <a id="export-emulator-certificate"></a>Export the Azure Cosmos DB TLS/SSL certificate
-You need to export the emulator certificate to successfully use the emulator endpoint from languages and runtime environments that do not integrate with the Windows Certificate Store. You can export the certificate using the Windows Certificate Manager. Use the following step-by-step instructions to export the "DocumentDBEmulatorCertificate" certificate as a BASE-64 encoded X.509 (.cer) file:
+You need to export the emulator certificate to successfully use the emulator endpoint from languages and runtime environments that don't integrate with the Windows Certificate Store. You can export the certificate using the Windows Certificate Manager. After the first time you run the emulator, use the following procedure to export the `DocumentDBEmulatorCertificate` certificate as a *BASE-64 encoded X.509 (.cer)* file:
-1. Start the Windows Certificate manager by running certlm.msc and navigate to the Personal->Certificates folder and open the certificate with the friendly name **DocumentDbEmulatorCertificate**.
+1. Run *certlm.msc* to start the Windows Certificate manager. Navigate to the **Personal** > **Certificates** folder.
- :::image type="content" source="./media/local-emulator-export-ssl-certificates/database-local-emulator-export-step-1.png" alt-text="Azure Cosmos DB local emulator export step 1":::
+1. Double-click the certificate with the friendly name **DocumentDbEmulatorCertificate** to open it.
-1. Click on **Details** then **OK**.
+ :::image type="content" source="./media/local-emulator-export-ssl-certificates/database-local-emulator-export-step-1.png" alt-text="Screenshot shows the personal certificates in the Certificate Manager." lightbox="./media/local-emulator-export-ssl-certificates/database-local-emulator-export-step-1.png":::
- :::image type="content" source="./media/local-emulator-export-ssl-certificates/database-local-emulator-export-step-2.png" alt-text="Azure Cosmos DB local emulator export step 2":::
+1. Select the **Details** tab.
-1. Click **Copy to File...**.
+ :::image type="content" source="./media/local-emulator-export-ssl-certificates/database-local-emulator-export-step-2.png" alt-text="Screenshot shows the General tab for the DocumentDBEmulatorCertificate certificate.":::
- :::image type="content" source="./media/local-emulator-export-ssl-certificates/database-local-emulator-export-step-3.png" alt-text="Azure Cosmos DB local emulator export step 3":::
+1. Select **Copy to File**.
-1. Click **Next**.
+ :::image type="content" source="./media/local-emulator-export-ssl-certificates/database-local-emulator-export-step-3.png" alt-text="Screenshot shows the Details tab for the DocumentDBEmulatorCertificate certificate where you can copy it to a file.":::
- :::image type="content" source="./media/local-emulator-export-ssl-certificates/database-local-emulator-export-step-4.png" alt-text="Azure Cosmos DB local emulator export step 4":::
+1. In the Certificate Export Wizard, select **Next**.
-1. Click **No, do not export private key**, then click **Next**.
+ :::image type="content" source="./media/local-emulator-export-ssl-certificates/database-local-emulator-export-step-4.png" alt-text="Screenshot shows the Certificate Export Wizard dialog.":::
- :::image type="content" source="./media/local-emulator-export-ssl-certificates/database-local-emulator-export-step-5.png" alt-text="Azure Cosmos DB local emulator export step 5":::
+1. Choose **No, do not export private key**, then select **Next**.
-1. Click on **Base-64 encoded X.509 (.CER)** and then **Next**.
+ :::image type="content" source="./media/local-emulator-export-ssl-certificates/database-local-emulator-export-step-5.png" alt-text="Screenshot shows the Export Private Key page.":::
- :::image type="content" source="./media/local-emulator-export-ssl-certificates/database-local-emulator-export-step-6.png" alt-text="Azure Cosmos DB local emulator export step 6":::
+1. Select **Base-64 encoded X.509 (.CER)** and then **Next**.
-1. Give the certificate a name. In this case **documentdbemulatorcert** and then click **Next**.
+ :::image type="content" source="./media/local-emulator-export-ssl-certificates/database-local-emulator-export-step-6.png" alt-text="Screenshot shows the Export File Format page.":::
- :::image type="content" source="./media/local-emulator-export-ssl-certificates/database-local-emulator-export-step-7.png" alt-text="Azure Cosmos DB local emulator export step 7":::
+1. Give the certificate a name, in this case *documentdbemulatorcert*, and then select **Next**.
-1. Click **Finish**.
+ :::image type="content" source="./media/local-emulator-export-ssl-certificates/database-local-emulator-export-step-7.png" alt-text="Screenshot shows the File to Export page where you enter a file name.":::
- :::image type="content" source="./media/local-emulator-export-ssl-certificates/database-local-emulator-export-step-8.png" alt-text="Azure Cosmos DB local emulator export step 8":::
+1. Select **Finish**.
+
+ :::image type="content" source="./media/local-emulator-export-ssl-certificates/database-local-emulator-export-step-8.png" alt-text="Screenshot shows the Completing the Certificate Export Wizard where you select Finish.":::
## Use the certificate with Java apps
-When running Java applications or MongoDB applications that uses a Java based client, it is easier to install the certificate into the Java default certificate store than passing the `-Djavax.net.ssl.trustStore=<keystore> -Djavax.net.ssl.trustStorePassword="<password>"` flags. For example, the included Java Demo application (`https://localhost:8081/_explorer/https://docsupdatetracker.net/index.html`) depends on the default certificate store.
+When you run Java applications or MongoDB applications that use a Java-based client, it's easier to install the certificate into the Java default certificate store than to pass the `-Djavax.net.ssl.trustStore=<keystore> -Djavax.net.ssl.trustStorePassword="<password>"` parameters. For example, the included Java Demo application (`https://localhost:8081/_explorer/https://docsupdatetracker.net/index.html`) depends on the default certificate store.
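If you do prefer a dedicated trust store over the default store, the flags mentioned above are passed on the `java` command line. The following is a hedged sketch only; the keystore path, password, and `my-app.jar` are hypothetical placeholders, not values from this article:

```bash
# Sketch: run a Java app against a custom trust store that contains the emulator certificate.
# /path/to/emulator-truststore, <password>, and my-app.jar are hypothetical placeholders.
java -Djavax.net.ssl.trustStore=/path/to/emulator-truststore \
     -Djavax.net.ssl.trustStorePassword="<password>" \
     -jar my-app.jar
```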
-Follow the instructions in the [Adding a Certificate to the Java Certificates Store](https://docs.oracle.com/cd/E54932_01/doc.705/e54936/cssg_create_ssl_cert.htm) to import the X.509 certificate into the default Java certificate store. Keep in mind you will be working in the *%JAVA_HOME%* directory when running keytool. After the certificate is imported into the certificate store, clients for SQL and Azure Cosmos DB's API for MongoDB will be able to connect to the Azure Cosmos DB Emulator.
+Follow the instructions in [Creating, Exporting, and Importing SSL Certificates](https://docs.oracle.com/cd/E54932_01/doc.705/e54936/cssg_create_ssl_cert.htm) to import the X.509 certificate into the default Java certificate store. Keep in mind that you work in the *%JAVA_HOME%* directory when running keytool. After the certificate is imported into the certificate store, clients for SQL and Azure Cosmos DB's API for MongoDB can connect to the Azure Cosmos DB Emulator.
-Alternatively you can run the following bash script to import the certificate:
+Alternatively, you can run the following bash script to import the certificate:
```bash
#!/bin/bash
# Remove any previously imported emulator certificate, then import the exported one.
# EMULATOR_CERT_PATH should point to the exported .cer file.
sudo $JAVA_HOME/bin/keytool -cacerts -delete -alias cosmos_emulator
sudo $JAVA_HOME/bin/keytool -cacerts -importcert -alias cosmos_emulator -file $EMULATOR_CERT_PATH
```
-Once the "CosmosDBEmulatorCertificate" TLS/SSL certificate is installed, your application should be able to connect and use the local Azure Cosmos DB Emulator. If you have any issues, you can follow the [Debugging SSL/TLS connections](https://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/ReadDebug.html) article. In most cases, the certificate may not be installed into the *%JAVA_HOME%/jre/lib/security/cacerts* store. For example, if you have multiple installed versions of Java your application may be using a different cacerts store than the one you updated.
+Once the `DocumentDBEmulatorCertificate` TLS/SSL certificate is installed, your application should be able to connect and use the local Azure Cosmos DB Emulator.
+
+If you have any issues, see [Debugging SSL/TLS connections](https://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/ReadDebug.html). In most cases, the certificate might not be installed into the *%JAVA_HOME%/jre/lib/security/cacerts* store. For example, if there's more than one installed version of Java, your application might be using a different certificate store than the one you updated.
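To confirm that the import succeeded, you can list the alias in the default store. This is a small sketch that assumes JDK 9 or later, where the `-cacerts` shortcut is available:

```bash
# Verify that the emulator certificate is present in the default cacerts store
$JAVA_HOME/bin/keytool -cacerts -list -alias cosmos_emulator
```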
## Use the certificate with Python apps
-When connecting to the emulator from Python apps, TLS verification is disabled. By default the [Python SDK](nosql/quickstart-python.md) for the API for NoSQL will not try to use the TLS/SSL certificate when connecting to the local emulator. If however you want to use TLS validation, you can follow the examples in the [Python socket wrappers](https://docs.python.org/3/library/ssl.html) documentation.
+When you connect to the emulator from Python apps, TLS verification is disabled. By default, the Python SDK for Azure Cosmos DB for NoSQL doesn't try to use the TLS/SSL certificate when it connects to the local emulator. For more information, see [Azure Cosmos DB for NoSQL client library for Python](nosql/quickstart-python.md).
+
+If you want to use TLS validation, you can follow the examples in [TLS/SSL wrapper for socket objects](https://docs.python.org/3/library/ssl.html).
## How to use the certificate in Node.js
-When connecting to the emulator from Node.js SDKs, TLS verification is disabled. By default the [Node.js SDK(version 1.10.1 or higher)](nosql/sdk-nodejs.md) for the API for NoSQL will not try to use the TLS/SSL certificate when connecting to the local emulator. If however you want to use TLS validation, you can follow the examples in the [Node.js documentation](https://nodejs.org/api/tls.html#tls_tls_connect_options_callback).
+When you connect to the emulator from Node.js SDKs, TLS verification is disabled. By default, the [Node.js SDK (version 1.10.1 or higher)](nosql/sdk-nodejs.md) for the API for NoSQL doesn't try to use the TLS/SSL certificate when it connects to the local emulator. If you want to use TLS validation, follow the examples in the [Node.js documentation](https://nodejs.org/api/tls.html#tls_tls_connect_options_callback).
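If you'd rather have Node.js trust the exported emulator certificate than disable verification, one common approach is the `NODE_EXTRA_CA_CERTS` environment variable. The certificate path and `app.js` below are placeholders, not values defined in this article:

```bash
# Add the exported emulator certificate to the set of CAs that Node.js trusts,
# then start the app (paths are hypothetical placeholders).
export NODE_EXTRA_CA_CERTS=/path/to/documentdbemulatorcert.cer
node app.js
```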
## Rotate emulator certificates
-You can force regenerate the emulator certificates by selecting **Reset Data** from the Azure Cosmos DB Emulator running in the Windows Tray. Note that this action will also wipe out all the data stored locally by the emulator.
+You can force the emulator to regenerate its certificates by selecting **Reset Data** from the Azure Cosmos DB Emulator icon in the Windows system tray. This action also wipes out all the data stored locally by the emulator.
:::image type="content" source="./media/local-emulator-export-ssl-certificates/database-local-emulator-reset-data.png" alt-text="Azure Cosmos DB local emulator reset data":::
-If you have installed the certificate into the Java certificate store or used them elsewhere, you need to re-import them using the current certificates. Your application can't connect to the local emulator until you update the certificates.
+If you installed the certificate into the Java certificate store or used it elsewhere, you need to reimport it using the current certificates. Your application can't connect to the local emulator until you update the certificates.
## Next steps
-* [Use command line parameters and PowerShell commands to control the emulator](emulator-command-line-parameters.md)
-* [Debug issues with the emulator](troubleshoot-local-emulator.md)
+* [Command-line and PowerShell reference for Azure Cosmos DB Emulator](emulator-command-line-parameters.md)
+* [Troubleshoot issues when using the Azure Cosmos DB Emulator](troubleshoot-local-emulator.md)
cosmos-db Connect Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/connect-account.md
Title: Connect a MongoDB application to Azure Cosmos DB
-description: Learn how to connect a MongoDB app to Azure Cosmos DB by getting the connection string from Azure portal
+description: Learn how to connect a MongoDB app to Azure Cosmos DB by getting the connection string from Azure portal.
Previously updated : 08/26/2021 Last updated : 03/14/2023 adobe-target: true adobe-target-activity: DocsExp-A/B-384740-MongoDB-2.8.2021 adobe-target-experience: Experience B adobe-target-content: ./connect-mongodb-account-experimental + # Connect a MongoDB application to Azure Cosmos DB+ [!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)] Learn how to connect your MongoDB app to an Azure Cosmos DB by using a MongoDB connection string. You can then use an Azure Cosmos DB database as the data store for your MongoDB app. This tutorial provides two ways to retrieve connection string information: -- [The quickstart method](#get-the-mongodb-connection-string-by-using-the-quick-start), for use with .NET, Node.js, MongoDB Shell, Java, and Python drivers-- [The custom connection string method](#get-the-mongodb-connection-string-to-customize), for use with other drivers
+* [The quickstart method](#get-the-mongodb-connection-string-by-using-the-quick-start), for use with .NET, Node.js, MongoDB Shell, Java, and Python drivers.
+* [The custom connection string method](#get-the-mongodb-connection-string-to-customize), for use with other drivers.
## Prerequisites -- An Azure account. If you don't have an Azure account, create a [free Azure account](https://azure.microsoft.com/free/) now.-- An Azure Cosmos DB account. For instructions, see [Build a web app using Azure Cosmos DB's API for MongoDB and .NET SDK](create-mongodb-dotnet.md).
+* An Azure account. If you don't have an Azure account, create a [free Azure account](https://azure.microsoft.com/free/) now.
+* An Azure Cosmos DB account. For instructions, see [Quickstart: Azure Cosmos DB for MongoDB driver for Node.js](create-mongodb-dotnet.md).
## Get the MongoDB connection string by using the quick start 1. In an Internet browser, sign in to the [Azure portal](https://portal.azure.com).
-2. In the **Azure Cosmos DB** blade, select the API.
-3. In the left pane of the account blade, click **Quick start**.
-4. Choose your platform (**.NET**, **Node.js**, **MongoDB Shell**, **Java**, **Python**). If you don't see your driver or tool listed, don't worry--we continuously document more connection code snippets. Please comment below on what you'd like to see. To learn how to craft your own connection, read [Get the account's connection string information](#get-the-mongodb-connection-string-to-customize).
-5. Copy and paste the code snippet into your MongoDB app.
+1. In the **Azure Cosmos DB** pane, select the API.
+1. In the left pane of the account pane, select **Quick start**.
+1. Choose your platform (**.NET**, **Node.js**, **MongoDB Shell**, **Java**, **Python**). If you don't see your driver or tool listed, don't worry. We continuously document more connection code snippets, so comment on what you'd like to see. To learn how to craft your own connection, read [Get the account's connection string information](#get-the-mongodb-connection-string-to-customize).
+1. Copy and paste the code snippet into your MongoDB app.
- :::image type="content" source="./media/connect-account/quickstart-blade.png" alt-text="Quick start blade":::
+ :::image type="content" source="./media/connect-account/quickstart-pane.png" alt-text="Screenshot showing the Quick start pane.":::
## Get the MongoDB connection string to customize 1. In an Internet browser, sign in to the [Azure portal](https://portal.azure.com).
-2. In the **Azure Cosmos DB** blade, select the API.
-3. In the left pane of the account blade, click **Connection String**.
-4. The **Connection String** blade opens. It has all the information necessary to connect to the account by using a driver for MongoDB, including a preconstructed connection string.
+1. In the **Azure Cosmos DB** pane, select the API.
+1. In the left pane of the account pane, select **Connection strings**.
+1. The **Connection strings** pane opens. It has all the information necessary to connect to the account by using a driver for MongoDB, including a preconstructed connection string.
- :::image type="content" source="./media/connect-account/connection-string-blade.png" alt-text="Connection String blade" lightbox= "./media/connect-account/connection-string-blade.png" :::
+ :::image type="content" source="./media/connect-account/connection-string-pane.png" alt-text="Screenshot showing the Connection strings pane." lightbox="./media/connect-account/connection-string-pane.png" :::
## Connection string requirements
-> [!Important]
+> [!IMPORTANT]
> Azure Cosmos DB has strict security requirements and standards. Azure Cosmos DB accounts require authentication and secure communication via *TLS*.
-Azure Cosmos DB supports the standard MongoDB connection string URI format, with a couple of specific requirements: Azure Cosmos DB accounts require authentication and secure communication via TLS. So, the connection string format is:
+Azure Cosmos DB supports the standard MongoDB connection string URI format, with a couple of specific requirements: Azure Cosmos DB accounts require authentication and secure communication via TLS. The connection string format is:
`mongodb://username:password@host:port/[database]?ssl=true`
-The values of this string are available in the **Connection String** blade shown earlier:
+The values of this string are:
* Username (required): Azure Cosmos DB account name.
* Password (required): Azure Cosmos DB account password.
* Host (required): FQDN of the Azure Cosmos DB account.
* Port (required): 10255.
* Database (optional): The database that the connection uses. If no database is provided, the default database is "test."
-* ssl=true (required)
+* ssl=true (required).
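Once you have these values, a quick way to test the connection from a shell is with the `mongosh` client. This is a hedged sketch; replace the placeholders with the values from the **Connection strings** pane:

```bash
# Connect to the Azure Cosmos DB for MongoDB account using the standard URI format
# (placeholders: <username>, <password>, <host>, <database>)
mongosh "mongodb://<username>:<password>@<host>:10255/<database>?ssl=true"
```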
-For example, consider the account shown in the **Connection String** blade. A valid connection string is:
+For example, consider the account shown in the **Connection strings** pane. A valid connection string is:
`mongodb://contoso123:0Fc3IolnL12312asdfawejunASDF@asdfYXX2t8a97kghVcUzcDv98hawelufhawefafnoQRGwNj2nMPL1Y9qsIr9Srdw==@contoso123.documents.azure.com:10255/mydatabase?ssl=true` ## Driver Requirements
-All drivers that support wire protocol version 3.4 or greater will support Azure Cosmos DB for MongoDB.
+All drivers that support wire protocol version 3.4 or greater support Azure Cosmos DB for MongoDB.
Specifically, client drivers must support the Service Name Identification (SNI) TLS extension and/or the appName connection string option. If the `appName` parameter is provided, it must be included as found in the connection string value in the Azure portal. ## Next steps -- Learn how to [use Studio 3T](connect-using-mongochef.md) with Azure Cosmos DB's API for MongoDB.-- Learn how to [use Robo 3T](connect-using-robomongo.md) with Azure Cosmos DB's API for MongoDB.-- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB's API for MongoDB.-- Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- - If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md)
+* [Connect to an Azure Cosmos DB account using Studio 3T](connect-using-mongochef.md).
+* [Use Robo 3T with Azure Cosmos DB's API for MongoDB](connect-using-robomongo.md)
cosmos-db Tutorial Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-query.md
Title: Query data with Azure Cosmos DB's API for MongoDB
-description: Learn how to query data from Azure Cosmos DB's API for MongoDB by using MongoDB shell commands
+ Title: Query data with Azure Cosmos DB for MongoDB
+description: Learn how to query data from Azure Cosmos DB for MongoDB by using MongoDB shell commands.
Previously updated : 12/03/2019 Last updated : 03/14/2023
-# Query data by using Azure Cosmos DB's API for MongoDB
+# Query data by using Azure Cosmos DB for MongoDB
[!INCLUDE[MongoDB](../includes/appliesto-mongodb.md)]
-The [Azure Cosmos DB's API for MongoDB](introduction.md) supports [MongoDB queries](https://docs.mongodb.com/manual/tutorial/query-documents/).
+The [Azure Cosmos DB for MongoDB](introduction.md) supports [MongoDB queries](https://docs.mongodb.com/manual/tutorial/query-documents/).
-This article covers the following tasks:
+This article covers the following tasks:
> [!div class="checklist"] > * Querying data stored in your Azure Cosmos DB database using MongoDB shell
-You can get started by using the examples in this document and watch the [Query Azure Cosmos DB with MongoDB shell](https://azure.microsoft.com/resources/videos/query-azure-cosmos-db-data-by-using-the-mongodb-shell/) video .
+You can get started by using the examples in this article.
## Sample document
The queries in this article use the following sample document.
"isRegistered": false } ```
-## <a id="examplequery1"></a> Example query 1
-Given the sample family document above, the following query returns the documents where the id field matches `WakefieldFamily`.
+## <a id="examplequery1"></a> Example query 1
-**Query**
+Given the sample family document, the following query returns the documents where the `id` field matches `WakefieldFamily`.
+
+Query:
```bash
db.families.find({ id: "WakefieldFamily"})
```
-**Results**
+Results:
```json {
db.families.find({ id: "WakefieldFamily"})
} ```
-## <a id="examplequery2"></a>Example query 2
+## <a id="examplequery2"></a>Example query 2
-The next query returns all the children in the family.
+The next query returns all the children in the family.
-**Query**
+Query:
-```bash
+```bash
db.families.find( { id: "WakefieldFamily" }, { children: true } )
-```
+```
-**Results**
+Results:
```json {
db.families.find( { id: "WakefieldFamily" }, { children: true } )
} ```
-## <a id="examplequery3"></a>Example query 3
+## <a id="examplequery3"></a>Example query 3
-The next query returns all the families that are registered.
+The next query returns all the families that are registered.
-**Query**
+Query:
```bash db.families.find( { "isRegistered" : true })
-```
+```
-**Results**
+Results:
-No document will be returned.
+No document is returned.
## <a id="examplequery4"></a>Example query 4
-The next query returns all the families that are not registered.
+The next query returns all the families that aren't registered.
-**Query**
+Query:
```bash db.families.find( { "isRegistered" : false })
-```
+```
-**Results**
+Results:
```json {
- "_id": ObjectId("58f65e1198f3a12c7090e68c"),
- "id": "WakefieldFamily",
- "parents": [{
- "familyName": "Wakefield",
- "givenName": "Robin"
- }, {
- "familyName": "Miller",
- "givenName": "Ben"
- }],
- "children": [{
- "familyName": "Merriam",
- "givenName": "Jesse",
- "gender": "female",
- "grade": 1,
- "pets": [{
- "givenName": "Goofy"
- }, {
- "givenName": "Shadow"
- }]
- }, {
- "familyName": "Miller",
- "givenName": "Lisa",
- "gender": "female",
- "grade": 8
- }],
- "address": {
- "state": "NY",
- "county": "Manhattan",
- "city": "NY"
- },
- "creationDate": 1431620462,
- "isRegistered": false
+ "_id": ObjectId("58f65e1198f3a12c7090e68c"),
+ "id": "WakefieldFamily",
+ "parents": [{
+ "familyName": "Wakefield",
+ "givenName": "Robin"
+ }, {
+ "familyName": "Miller",
+ "givenName": "Ben"
+ }],
+ "children": [{
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female",
+ "grade": 1,
+ "pets": [{
+ "givenName": "Goofy"
+ }, {
+ "givenName": "Shadow"
+ }]
+ }, {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8
+ }],
+ "address": {
+ "state": "NY",
+ "county": "Manhattan",
+ "city": "NY"
+ },
+ "creationDate": 1431620462,
+ "isRegistered": false
} ``` ## <a id="examplequery5"></a>Example query 5
-The next query returns all the families that are not registered and state is NY.
+The next query returns all the families that aren't registered and state is NY.
-**Query**
+Query:
```bash db.families.find( { "isRegistered" : false, "address.state" : "NY" })
-```
+```
-**Results**
+Results:
```json {
- "_id": ObjectId("58f65e1198f3a12c7090e68c"),
- "id": "WakefieldFamily",
- "parents": [{
- "familyName": "Wakefield",
- "givenName": "Robin"
- }, {
- "familyName": "Miller",
- "givenName": "Ben"
- }],
- "children": [{
- "familyName": "Merriam",
- "givenName": "Jesse",
- "gender": "female",
- "grade": 1,
- "pets": [{
- "givenName": "Goofy"
- }, {
- "givenName": "Shadow"
- }]
- }, {
- "familyName": "Miller",
- "givenName": "Lisa",
- "gender": "female",
- "grade": 8
- }],
- "address": {
- "state": "NY",
- "county": "Manhattan",
- "city": "NY"
- },
- "creationDate": 1431620462,
- "isRegistered": false
+ "_id": ObjectId("58f65e1198f3a12c7090e68c"),
+ "id": "WakefieldFamily",
+ "parents": [{
+ "familyName": "Wakefield",
+ "givenName": "Robin"
+ }, {
+ "familyName": "Miller",
+ "givenName": "Ben"
+ }],
+ "children": [{
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female",
+ "grade": 1,
+ "pets": [{
+ "givenName": "Goofy"
+ }, {
+ "givenName": "Shadow"
+ }]
+ }, {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8
+ }],
+ "address": {
+ "state": "NY",
+ "county": "Manhattan",
+ "city": "NY"
+ },
+ "creationDate": 1431620462,
+ "isRegistered": false
} ```
db.families.find( { "isRegistered" : false, "address.state" : "NY" })
The next query returns all the families where children grades are 8.
-**Query**
+Query:
```bash
db.families.find( { children : { $elemMatch: { grade : 8 }} } )
```
-**Results**
+Results:
```json {
- "_id": ObjectId("58f65e1198f3a12c7090e68c"),
- "id": "WakefieldFamily",
- "parents": [{
- "familyName": "Wakefield",
- "givenName": "Robin"
- }, {
- "familyName": "Miller",
- "givenName": "Ben"
- }],
- "children": [{
- "familyName": "Merriam",
- "givenName": "Jesse",
- "gender": "female",
- "grade": 1,
- "pets": [{
- "givenName": "Goofy"
- }, {
- "givenName": "Shadow"
- }]
- }, {
- "familyName": "Miller",
- "givenName": "Lisa",
- "gender": "female",
- "grade": 8
- }],
- "address": {
- "state": "NY",
- "county": "Manhattan",
- "city": "NY"
- },
- "creationDate": 1431620462,
- "isRegistered": false
+ "_id": ObjectId("58f65e1198f3a12c7090e68c"),
+ "id": "WakefieldFamily",
+ "parents": [{
+ "familyName": "Wakefield",
+ "givenName": "Robin"
+ }, {
+ "familyName": "Miller",
+ "givenName": "Ben"
+ }],
+ "children": [{
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female",
+ "grade": 1,
+ "pets": [{
+ "givenName": "Goofy"
+ }, {
+ "givenName": "Shadow"
+ }]
+ }, {
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8
+ }],
+ "address": {
+ "state": "NY",
+ "county": "Manhattan",
+ "city": "NY"
+ },
+ "creationDate": 1431620462,
+ "isRegistered": false
} ```
db.families.find( { children : { $elemMatch: { grade : 8 }} } )
The next query returns all the families where size of children array is 3.
-**Query**
+Query:
```bash
db.Family.find( {children: { $size:3} } )
```
-**Results**
+Results:
-No results will be returned as there are no families with more than two children. Only when parameter is 2 this query will succeed and return the full document.
+No results are returned because there are no families with more than two children. Only when the parameter value is `2` does this query succeed and return the full document.
## Next steps
-In this tutorial, you've done the following:
+In this tutorial, you've done the following tasks:
> [!div class="checklist"]
-> * Learned how to query using Azure Cosmos DB's API for MongoDB
+> * Learned how to query using Azure Cosmos DB for MongoDB
You can now proceed to the next tutorial to learn how to distribute your data globally.
cosmos-db Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor.md
Title: Monitor Azure Cosmos DB | Microsoft Docs
-description: Learn how to monitor the performance and availability of Azure Cosmos DB.
+ Title: Monitor Azure Cosmos DB
+description: Learn how to monitor the performance and availability of Azure Cosmos DB. You can monitor your data with client-side and server-side metrics.
Previously updated : 05/03/2020 Last updated : 03/08/2023 # Monitor Azure Cosmos DB [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by Azure Cosmos DB databases and how you can use the features of Azure Monitor to analyze and alert on this data.
+When you have critical applications and business processes that rely on Azure resources, you want to monitor those resources for availability, performance, and operation. This article describes the monitoring data generated by Azure Cosmos DB databases and how you can use the features of Azure Monitor to analyze and alert on this data.
You can monitor your data with client-side and server-side metrics. When using server-side metrics, you can monitor the data stored in Azure Cosmos DB with the following options:
-* **Monitor from Azure Cosmos DB portal:** You can monitor with the metrics available within the **Metrics** tab of the Azure Cosmos DB account. The metrics on this tab include throughput, storage, availability, latency, consistency, and system level metrics. By default, these metrics have a retention period of seven days. To learn more, see the [Monitoring data collected from Azure Cosmos DB](#monitoring-data) section of this article.
+- **Monitor from Azure Cosmos DB portal:** You can monitor with the metrics available within the **Metrics** tab of the Azure Cosmos DB account. The metrics on this tab include throughput, storage, availability, latency, consistency, and system level metrics. By default, these metrics have a retention period of seven days. To learn more, see the [Monitoring data collected from Azure Cosmos DB](#monitoring-data) section of this article.
-* **Monitor with metrics in Azure monitor:** You can monitor the metrics of your Azure Cosmos DB account and create dashboards from the Azure Monitor. Azure Monitor collects the Azure Cosmos DB metrics by default, you will not need to explicitly configure anything. These metrics are collected with one-minute granularity, the granularity may vary based on the metric you choose. By default, these metrics have a retention period of 30 days. Most of the metrics that are available from the previous options are also available in these metrics. The dimension values for the metrics such as container name are case-insensitive. So you need to use case-insensitive comparison when doing string comparisons on these dimension values. To learn more, see the [Analyze metric data](#analyzing-metrics) section of this article.
+- **Monitor with metrics in Azure Monitor:** You can monitor the metrics of your Azure Cosmos DB account and create dashboards from Azure Monitor. Monitor collects the Azure Cosmos DB metrics by default. You don't need to explicitly configure anything. These metrics are collected with one-minute granularity. The granularity might vary based on the metric you choose. By default, these metrics have a retention period of 30 days.
-* **Monitor with diagnostic logs in Azure Monitor:** You can monitor the logs of your Azure Cosmos DB account and create dashboards from the Azure Monitor. Data such as events and traces that occur at a second granularity are stored as logs. For example, if the throughput of a container changes, the properties of an Azure Cosmos DB account are changed, and these events are captured within the logs. You can analyze these logs by running queries on the gathered data. To learn more, see the [Analyze log data](#analyzing-logs) section of this article.
+ Most of the metrics that are available from the previous options are also available in these metrics. The dimension values for the metrics, such as container name, are case insensitive. Use case insensitive comparison when doing string comparisons on these dimension values. To learn more, see the [Analyze metric data](#analyzing-metrics) section of this article.
-* **Monitor programmatically with SDKs:** You can monitor your Azure Cosmos DB account programmatically by using the .NET, Java, Python, Node.js SDKs, and the headers in REST API. To learn more, see the [Monitoring Azure Cosmos DB programmatically](#monitor-azure-cosmos-db-programmatically) section of this article.
+- **Monitor with diagnostic logs in Azure Monitor:** You can monitor the logs of your Azure Cosmos DB account and create dashboards from Azure Monitor. Data such as events and traces that occur at a second granularity are stored as logs. For example, the logs capture events such as a change to the throughput of a container or a change to the properties of an Azure Cosmos DB account. You can analyze these logs by running queries on the gathered data. To learn more, see the [Analyze log data](#analyzing-logs) section of this article.
-The following image shows different options available to monitor Azure Cosmos DB account through Azure portal:
+- **Monitor programmatically with SDKs:** You can monitor your Azure Cosmos DB account programmatically by using the .NET, Java, Python, Node.js SDKs, and the headers in REST API. To learn more, see the [Monitoring Azure Cosmos DB programmatically](#monitor-azure-cosmos-db-programmatically) section of this article.
+The following image shows the different options available to monitor an Azure Cosmos DB account through the Azure portal:
-When using Azure Cosmos DB, at the client-side you can collect the details for request charge, activity ID, exception/stack trace information, HTTP status/sub-status code, diagnostic string to debug any issue that might occur. This information is also required if you need to reach out to the Azure Cosmos DB support team.
+
+On the client side, you can collect details about the request charge, activity ID, exception and stack trace information, HTTP status and substatus code, and the diagnostic string to debug issues that might occur. You also need this information if you contact the Azure Cosmos DB support team.
## Monitor overview
-The **Overview** page in the Azure portal for each Azure Cosmos DB account includes a brief view of the resource usage, such as total requests, requests that resulted in a specific HTTP status code, and hourly billing. This information is helpful, however only a small amount of the monitoring data is available from this pane. Some of this data is collected automatically and is available for analysis as soon as you create the resource. You can enable other types of data collection with some configuration.
+The **Overview** page in the Azure portal for each Azure Cosmos DB account includes a brief view of the resource usage, such as total requests, requests that resulted in a specific HTTP status code, and hourly billing. This information is helpful, but only a small amount of the monitoring data is available from this pane. Some of this data is collected automatically and is available for analysis as soon as you create the resource. You can enable other types of data collection after some configuration.
## What is Azure Monitor?
-Azure Cosmos DB creates monitoring data using [Azure Monitor](../azure-monitor/overview.md), which is a full stack monitoring service in Azure that provides a complete set of features to monitor your Azure resources in addition to resources in other clouds and on-premises.
+Azure Cosmos DB creates monitoring data using [Azure Monitor](../azure-monitor/overview.md). Monitor is a full stack monitoring service in Azure that provides a complete set of features to monitor your Azure resources in addition to resources in other clouds and on-premises.
-If you're not already familiar with monitoring Azure services, start with the article [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md), which describes the following concepts:
+If you're not already familiar with monitoring Azure services, start with [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md), which describes the following concepts:
-* What is Azure Monitor?
-* Costs associated with monitoring
-* Monitoring data collected in Azure
-* Configuring data collection
-* Standard tools in Azure for analyzing and alerting on monitoring data
+- What is Azure Monitor?
+- Costs associated with monitoring
+- Monitoring data collected in Azure
+- Configuring data collection
+- Standard tools in Azure for analyzing and alerting on monitoring data
-The following sections build on this article by describing the specific data gathered from Azure Cosmos DB and providing examples for configuring data collection and analyzing this data with Azure tools.
+The following sections build on this article. They describe the specific data gathered from Azure Cosmos DB and provide examples for configuring data collection and analyzing this data with Azure tools.
## Azure Cosmos DB insights
-Azure Cosmos DB insights is a feature based on the [workbooks feature of Azure Monitor](../azure-monitor/visualize/workbooks-overview.md) and uses the same monitoring data collected for Azure Cosmos DB described in the sections below. Use Azure Monitor for a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience, and use the other features of Azure Monitor for detailed analysis and alerting. To learn more, see the [Explore Azure Cosmos DB insights](insights-overview.md) article.
+Azure Cosmos DB insights is a feature based on the [workbooks feature of Azure Monitor](../azure-monitor/visualize/workbooks-overview.md) and uses the same monitoring data collected for Azure Cosmos DB described in the following sections. Use Monitor for a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. Use the other features of Monitor for detailed analysis and alerting. To learn more, see [Explore Azure Cosmos DB insights](insights-overview.md).
> [!NOTE]
-> When creating containers, make sure you don't create two containers with the same name but different casing. That's because some parts of the Azure platform are not case-sensitive, and this can result in confusion/collision of telemetry and actions on containers with such names.
+> When creating containers, make sure you don't create two containers with the same name but different casing. Some parts of the Azure platform are not case sensitive. This situation can result in confusion or collision of telemetry and actions on containers with such names.
-## Monitoring data
+## Monitoring data
-Azure Cosmos DB collects the same kinds of monitoring data as other Azure resources, which are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data). See [Azure Cosmos DB monitoring data reference](monitor-reference.md) for a detailed reference of the logs and metrics created by Azure Cosmos DB.
+Azure Cosmos DB collects the same kinds of monitoring data as other Azure resources, which are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data). For a detailed reference of the logs and metrics created by Azure Cosmos DB, see [Azure Cosmos DB monitoring data reference](monitor-reference.md).
-The **Overview** page in the Azure portal for each Azure Cosmos DB database includes a brief view of the database usage including its request and hourly billing usage. This is useful information but only a small amount of the monitoring data available. Some of this data is collected automatically and available for analysis as soon as you create the database while you can enable more data collection with some configuration.
+The **Overview** page in the Azure portal for each Azure Cosmos DB database includes a brief view of the database usage, including its request and hourly billing usage. This information is useful, but it's only a small amount of the monitoring data available. Some of this data is collected automatically and is available for analysis as soon as you create the database. You can enable more data collection after some configuration.
## Collection and routing
-Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+Platform metrics and the Activity log are collected and stored automatically. Use a diagnostic setting to route them to other locations.
-Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+Resource logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
-See [Create diagnostic setting to collect platform logs and metrics in Azure](monitor-resource-logs.md) for the detailed process for creating a diagnostic setting using the Azure portal and some diagnostic query examples. When you create a diagnostic setting, you specify which categories of logs to collect.
+To learn about the process to create a diagnostic setting using the Azure portal and diagnostic query examples, see [Create diagnostic setting to collect platform logs and metrics in Azure](monitor-resource-logs.md). When you create a diagnostic setting, you specify which categories of logs to collect.
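As a rough illustration of that configuration step, a diagnostic setting can also be created from the Azure CLI. This is a hedged sketch; the setting name, the resource and workspace IDs, and the single `DataPlaneRequests` category are placeholders you'd adjust to your own environment:

```bash
# Sketch: route DataPlaneRequests resource logs to a Log Analytics workspace (IDs are placeholders)
az monitor diagnostic-settings create \
  --name "cosmos-diagnostics" \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>" \
  --workspace "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --logs '[{"category":"DataPlaneRequests","enabled":true}]'
```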
-The metrics and logs you can collect are discussed in the following sections.
+The following sections discuss the metrics and logs you can collect.
## Analyzing metrics
-Azure Cosmos DB provides a custom experience for working with metrics. You can analyze metrics for Azure Cosmos DB with metrics from other Azure services using Metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool. You can also check out how to monitor [server-side latency](monitor-server-side-latency.md), [request unit usage](monitor-request-unit-usage.md), and [normalized request unit usage](monitor-normalized-request-units.md) for your Azure Cosmos DB resources.
+Azure Cosmos DB provides a custom experience for working with metrics. You can analyze metrics for Azure Cosmos DB with metrics from other Azure services using Metrics explorer by opening **Metrics** from the **Azure Monitor** menu. For more information about this tool, see [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md).
+
+You can also monitor [server-side latency](monitor-server-side-latency.md), [request unit usage](monitor-request-unit-usage.md), and [normalized request unit usage](monitor-normalized-request-units.md) for your Azure Cosmos DB resources.
-For a list of the platform metrics collected for Azure Cosmos DB, see [Monitoring Azure Cosmos DB data reference metrics](monitor-reference.md#metrics) article.
+For a list of the platform metrics collected for Azure Cosmos DB, see [Monitoring Azure Cosmos DB data reference metrics](monitor-reference.md#metrics).
All metrics for Azure Cosmos DB are in the namespace **Azure Cosmos DB standard metrics**. You can use the following dimensions with these metrics when adding a filter to a chart:
-* CollectionName
-* DatabaseName
-* OperationType
-* Region
-* StatusCode
+- CollectionName
+- DatabaseName
+- OperationType
+- Region
+- StatusCode
-For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
+You can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
### View operation level metrics for Azure Cosmos DB 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Select **Monitor** from the left-hand navigation bar, and select **Metrics**.
+1. Select **Monitor** from the left navigation bar, and select **Metrics**.
+
+ :::image type="content" source="./media/monitor/monitor-metrics-blade.png" alt-text="Screenshot shows the Metrics pane in Azure Monitor.":::
- :::image type="content" source="./media/monitor/monitor-metrics-blade.png" alt-text="Metrics pane in Azure Monitor":::
+1. In the **Select a scope** pane, select a **Subscription**. You can narrow the scopes by **Resource type** and **Locations**. Select **Azure Cosmos DB accounts** to quickly find your account.
-1. From the **Metrics** pane > **Select a resource** > choose the required **subscription**, and **resource group**. For the **Resource type**, select **Azure Cosmos DB accounts**, choose one of your existing Azure Cosmos DB accounts, and select **Apply**.
+ :::image type="content" source="./media/monitor/select-cosmosdb-account.png" alt-text="Screenshot shows the Select a resource pane in Metrics.":::
- :::image type="content" source="./media/monitor/select-cosmosdb-account.png" alt-text="Choose an Azure Cosmos DB account to view metrics":::
+1. Select an Azure Cosmos DB account, and choose **Apply**.
-1. Next you can select a metric from the list of available metrics. You can select metrics specific to request units, storage, latency, availability, Cassandra, and others. To learn in detail about all the available metrics in this list, see the [Metrics by category](monitor-reference.md) article. In this example, let's select **Request units** and **Avg** as the aggregation value.
+1. Next, select a metric from the list of available metrics. You can select metrics specific to request units, storage, latency, availability, Cassandra, and others. For more information about the available metrics, see [Metrics by category](monitor-reference.md). In this example, select **Total request units** and **Avg** as the aggregation value.
- In addition to these details, you can also select the **Time range** and **Time granularity** of the metrics. At max, you can view metrics for the past 30 days. After you apply the filter, a chart is displayed based on your filter. You can see the average number of request units consumed per minute for the selected period.
+ :::image type="content" source="./media/monitor/metric-types.png" alt-text="Screenshot shows the option to choose a metric from the Azure portal.":::
- :::image type="content" source="./media/monitor/metric-types.png" alt-text="Choose a metric from the Azure portal":::
+ In addition to these details, you can also select the **Time range** and **Time granularity** of the metrics. At the most, you can view metrics for the past 30 days.
+
+After you apply the filter, the Azure portal displays a chart based on your filter. You can see the average number of request units consumed per minute for the selected period.
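If you prefer the command line, roughly the same data can be retrieved with the Azure CLI. This is a sketch; the resource ID is a placeholder, and the `TotalRequestUnits` metric name is an assumption based on the standard Azure Cosmos DB metric namespace:

```bash
# Sketch: average request units per minute for an Azure Cosmos DB account (resource ID is a placeholder)
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>" \
  --metric "TotalRequestUnits" \
  --interval PT1M \
  --aggregation Average
```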
### Add filters to metrics
-You can also filter metrics and the chart displayed by a specific **CollectionName**, **DatabaseName**, **OperationType**, **Region**, and **StatusCode**. To filter the metrics, select **Add filter** and choose the required property such as **OperationType** and select a value such as **Query**. The graph then displays the request units consumed for the query operation for the selected period. The operations executed via Stored procedure aren't logged so they aren't available under the OperationType metric.
+You can also filter metrics and the chart displayed by properties:
+
+- CapacityType
+- CollectionName
+- DatabaseName
+- OperationType
+- Region
+- Status
+- StatusCode
+To filter the metrics, select **Add filter** and choose the required property, such as **OperationType**. Then select a value, such as **Query**.
-You can group metrics by using the **Apply splitting** option. For example, you can group the request units per operation type and view the graph for all the operations at once as shown in the following image:
+The graph then displays the request units consumed for the query operation for the selected period. Operations run by using a stored procedure aren't logged, so they aren't available under the OperationType metric.
+
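The same kind of dimension filter can be expressed from the CLI with an OData-style `--filter` string. This is a hedged sketch; the resource ID is a placeholder, and the metric and dimension names are assumed from the list shown earlier:

```bash
# Sketch: request units consumed by Query operations only (resource ID is a placeholder)
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>" \
  --metric "TotalRequestUnits" \
  --filter "OperationType eq 'Query'" \
  --aggregation Average
```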
+### Group metrics
+
+You can group metrics by using the **Apply splitting** option. For example, you can group the request units per operation type and view the graph for all the operations at once, as shown in the following image:
+ ## Analyzing logs
-Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
+Data in Azure Monitor Logs is stored in tables. Each table has its own set of unique properties.
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). For a list of the types of resource logs collected for Azure Cosmos DB, see [Monitoring Azure Cosmos DB data reference](monitor-reference.md#resource-logs).
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). For the types of resource logs collected for Azure Cosmos DB, see [Monitoring Azure Cosmos DB data reference](monitor-reference.md#resource-logs).
-The [Activity log](../azure-monitor/essentials/activity-log.md) is a platform that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+The [Activity log](../azure-monitor/essentials/activity-log.md) is a platform log that provides insight into subscription-level events. You can view it independently or route it to Monitor Logs, where you can do much more complex queries using Log Analytics.
Azure Cosmos DB stores data in the following tables.
### Sample Kusto queries
-Prior to using Log Analytics to issue Kusto queries, you must [enable diagnostic logs for control plane operations](./audit-control-plane-logs.md#enable-diagnostic-logs-for-control-plane-operations). When enabling diagnostic logs, you will select between storing your data in a single [AzureDiagnostics table (legacy)](../azure-monitor/essentials/resource-logs.md#azure-diagnostics-mode) or [resource-specific tables](../azure-monitor/essentials/resource-logs.md#resource-specific).
+Prior to using Log Analytics to issue Kusto queries, you must [enable diagnostic logs for control plane operations](./audit-control-plane-logs.md#enable-diagnostic-logs-for-control-plane-operations). When you enable diagnostic logs, select between storing your data in a single [AzureDiagnostics table (legacy)](../azure-monitor/essentials/resource-logs.md#azure-diagnostics-mode) or [resource-specific tables](../azure-monitor/essentials/resource-logs.md#resource-specific).
-When you select **Logs** from the Azure Cosmos DB menu, Log Analytics is opened with the query scope set to the current Azure Cosmos DB account. Log queries will only include data from that resource.
+When you select **Logs** from the Azure Cosmos DB menu, Log Analytics is opened with the query scope set to the current Azure Cosmos DB account. Log queries only include data from that resource.
> [!IMPORTANT] > If you want to run a query that includes data from other accounts or data from other Azure services, select **Logs** from the **Azure Monitor** menu. For more information, see [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md).
-Here are some queries that you can enter into the **Log search** search bar to help you monitor your Azure Cosmos DB resources. The exact text of the queries will depend on the [collection mode](../azure-monitor/essentials/resource-logs.md#select-the-collection-mode) you selected when you enabled diagnostics logs.
+Here are some queries that you can enter into the **Log search** search bar to help you monitor your Azure Cosmos DB resources. The exact text of the queries depends on the [collection mode](../azure-monitor/essentials/resource-logs.md#select-the-collection-mode) you selected when you enabled diagnostics logs.
#### [AzureDiagnostics table (legacy)](#tab/azure-diagnostics)
-* To query for all control-plane logs from Azure Cosmos DB:
+To query for all control-plane logs from Azure Cosmos DB:
- ```kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
- | where Category=="ControlPlaneRequests"
- ```
+```kusto
+AzureDiagnostics
+| where ResourceProvider=="MICROSOFT.DOCUMENTDB"
+| where Category=="ControlPlaneRequests"
+```
-* To query for all data-plane logs from Azure Cosmos DB:
+To query for all data-plane logs from Azure Cosmos DB:
- ```kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
- | where Category=="DataPlaneRequests"
- ```
+```kusto
+AzureDiagnostics
+| where ResourceProvider=="MICROSOFT.DOCUMENTDB"
+| where Category=="DataPlaneRequests"
+```
-* To query for a filtered list of data-plane logs, specific to a single resource:
+To query for a filtered list of data-plane logs, specific to a single resource:
- ```kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
- | where Category=="DataPlaneRequests"
- | where Resource=="<account-name>"
- ```
+```kusto
+AzureDiagnostics
+| where ResourceProvider=="MICROSOFT.DOCUMENTDB"
+| where Category=="DataPlaneRequests"
+| where Resource=="<account-name>"
+```
- > [!IMPORTANT]
- > In the **AzureDiagnostics** table, many fields are case-sensitive and uppercase including, but not limited to; *ResourceId*, *ResourceGroup*, *ResourceProvider*, and *Resource*.
+> [!IMPORTANT]
+> In the **AzureDiagnostics** table, many fields are case sensitive and uppercase, including, but not limited to, *ResourceId*, *ResourceGroup*, *ResourceProvider*, and *Resource*.
-* To get a count of data-plane logs, grouped by resource:
+To get a count of data-plane logs, grouped by resource:
- ```kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
- | where Category=="DataPlaneRequests"
- | summarize count() by Resource
- ```
+```kusto
+AzureDiagnostics
+| where ResourceProvider=="MICROSOFT.DOCUMENTDB"
+| where Category=="DataPlaneRequests"
+| summarize count() by Resource
+```
-* To generate a chart for data-plane logs, grouped by the type of operation:
+To generate a chart for data-plane logs, grouped by the type of operation:
- ```kusto
- AzureDiagnostics
- | where ResourceProvider=="MICROSOFT.DOCUMENTDB"
- | where Category=="DataPlaneRequests"
- | summarize count() by OperationName
- | render columnchart
- ```
+```kusto
+AzureDiagnostics
+| where ResourceProvider=="MICROSOFT.DOCUMENTDB"
+| where Category=="DataPlaneRequests"
+| summarize count() by OperationName
+| render columnchart
+```
#### [Resource-specific table](#tab/resource-specific-diagnostics)
-* To query for all control-plane logs from Azure Cosmos DB:
+To query for all control-plane logs from Azure Cosmos DB:
- ```kusto
- CDBControlPlaneRequests
- ```
+```kusto
+CDBControlPlaneRequests
+```
-* To query for all data-plane logs from Azure Cosmos DB:
+To query for all data-plane logs from Azure Cosmos DB:
- ```kusto
- CDBDataPlaneRequests
- ```
+```kusto
+CDBDataPlaneRequests
+```
-* To query for a filtered list of data-plane logs, specific to a single resource:
+To query for a filtered list of data-plane logs, specific to a single resource:
- ```kusto
- CDBDataPlaneRequests
- | where AccountName=="<account-name>"
- ```
+```kusto
+CDBDataPlaneRequests
+| where AccountName=="<account-name>"
+```
-* To get a count of data-plane logs, grouped by resource:
+To get a count of data-plane logs, grouped by resource:
- ```kusto
- CDBDataPlaneRequests
- | summarize count() by AccountName
- ```
+```kusto
+CDBDataPlaneRequests
+| summarize count() by AccountName
+```
-* To generate a chart for data-plane logs, grouped by the type of operation:
+To generate a chart for data-plane logs, grouped by the type of operation:
- ```kusto
- CDBDataPlaneRequests
- | summarize count() by OperationName
- | render piechart
- ```
+```kusto
+CDBDataPlaneRequests
+| summarize count() by OperationName
+| render piechart
+```
-These examples are just a small sampling of the rich queries that can be performed in Azure Monitor using the Kusto Query Language. For more information, see [samples for Kusto queries](/azure/data-explorer/kusto/query/samples?pivots=azuremonitor).
+These examples are just a small sampling of the rich queries that can be performed in Monitor using the Kusto Query Language. For more information, see [samples for Kusto queries](/azure/data-explorer/kusto/query/samples?pivots=azuremonitor).
## Alerts
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks
+Azure Monitor alerts proactively notify you when Monitor finds important conditions in your monitoring data. Alerts allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
For example, the following table lists a few alert rules for your resources. You can find a detailed list of alert rules in the Azure portal. To learn more, see the [how to configure alerts](create-alerts.md) article. | Alert type | Condition | Description | |:|:|:|
-|Rate limiting on request units (metric alert) |Dimension name: StatusCode, Operator: Equals, Dimension values: 429 | Alerts if the container or a database has exceeded the provisioned throughput limit. |
-|Region failed over |Operator: Greater than, Aggregation type: Count, Threshold value: 1 | When a single region is failed over. This alert is helpful if you didn't enable service-managed failover. |
-| Rotate keys(activity log alert)| Event level: Informational, Status: started| Alerts when the account keys are rotated. You can update your application with the new keys. |
+|Rate limiting on request units (metric alert) |Dimension name: *StatusCode*, Operator: *Equals*, Dimension values: 429 | Alerts if the container or a database has exceeded the provisioned throughput limit. |
+|Region failed over |Operator: *Greater than*, Aggregation type: *Count*, Threshold value: 1 | When a single region is failed over. This alert is helpful if you didn't enable service-managed failover. |
+| Rotate keys (activity log alert) | Event level: *Informational*, Status: *Started* | Alerts when the account keys are rotated. You can update your application with the new keys. |
## Monitor Azure Cosmos DB programmatically
-The account level metrics available in the portal, such as account storage usage and total requests, aren't available via the API for NoSQL. However, you can retrieve usage data at the collection level by using the API for NoSQL. To retrieve collection level data, do the following:
+The account level metrics available in the portal, such as account storage usage and total requests, aren't available by using the API for NoSQL. However, you can retrieve usage data at the collection level by using the API for NoSQL. To retrieve collection level data, use one of the following approaches:
-* To use the REST API, [perform a GET on the collection](/rest/api/cosmos-db/get-a-collection). The quota and usage information for the collection is returned in the x-ms-resource-quota and x-ms-resource-usage headers in the response.
+- To use the REST API, [perform a GET on the collection](/rest/api/cosmos-db/get-a-collection). The quota and usage information for the collection is returned in the `x-ms-resource-quota` and `x-ms-resource-usage` headers in the response.
-* To use the .NET SDK, use the [DocumentClient.ReadDocumentCollectionAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.readdocumentcollectionasync) method, which returns a [ResourceResponse](/dotnet/api/microsoft.azure.documents.client.resourceresponse-1) that contains many usage properties such as **CollectionSizeUsage**, **DatabaseUsage**, **DocumentUsage**, and more.
+- To use the .NET SDK, use the [DocumentClient.ReadDocumentCollectionAsync](/dotnet/api/microsoft.azure.documents.client.documentclient.readdocumentcollectionasync) method, which returns a [ResourceResponse](/dotnet/api/microsoft.azure.documents.client.resourceresponse-1) that contains many usage properties such as **CollectionSizeUsage**, **DatabaseUsage**, and **DocumentUsage**.
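The following C# fragment is a minimal sketch of the SDK approach. It assumes the .NET SDK 2.x (`Microsoft.Azure.DocumentDB`) and a `DocumentClient` that's already configured with your account endpoint and key; the database and collection IDs are placeholders that you supply.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

public static class CollectionUsageSample
{
    // Reads quota and usage details for a single collection (container).
    public static async Task PrintUsageAsync(DocumentClient client, string databaseId, string collectionId)
    {
        Uri collectionUri = UriFactory.CreateDocumentCollectionUri(databaseId, collectionId);

        // ReadDocumentCollectionAsync returns a ResourceResponse<DocumentCollection>
        // that carries usage properties alongside the collection definition.
        ResourceResponse<DocumentCollection> response =
            await client.ReadDocumentCollectionAsync(collectionUri);

        Console.WriteLine($"Collection size usage (KB): {response.CollectionSizeUsage}");
        Console.WriteLine($"Document count usage: {response.DocumentUsage}");

        // The same information is also exposed through the raw response headers.
        Console.WriteLine($"x-ms-resource-quota: {response.ResponseHeaders["x-ms-resource-quota"]}");
        Console.WriteLine($"x-ms-resource-usage: {response.ResponseHeaders["x-ms-resource-usage"]}");
    }
}
```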
-To access more metrics, use the [Azure Monitor SDK](https://www.nuget.org/packages/Microsoft.Azure.Insights). Available metric definitions can be retrieved by calling:
+To access more metrics, use the [Azure Monitor SDK](https://www.nuget.org/packages/Microsoft.Azure.Insights). Available metric definitions can be retrieved by using this format:
```http https://management.azure.com/subscriptions/{SubscriptionId}/resourceGroups/{ResourceGroup}/providers/Microsoft.DocumentDb/databaseAccounts/{DocumentDBAccountName}/providers/microsoft.insights/metricDefinitions?api-version=2018-01-01
To retrieve individual metrics, use the following format:
https://management.azure.com/subscriptions/{SubscriptionId}/resourceGroups/{ResourceGroup}/providers/Microsoft.DocumentDb/databaseAccounts/{DocumentDBAccountName}/providers/microsoft.insights/metrics?timespan={StartTime}/{EndTime}&interval={AggregationInterval}&metricnames={MetricName}&aggregation={AggregationType}&`$filter={Filter}&api-version=2018-01-01 ```
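As an illustration only, the following C# sketch issues that GET request with `HttpClient`. The subscription ID, resource group, account name, and bearer token are placeholders you must supply (for example, by acquiring a token for `https://management.azure.com/`), and the time span, metric name, and aggregation are assumptions chosen for the example.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class MetricsApiSample
{
    public static async Task<string> GetTotalRequestsAsync(
        string subscriptionId, string resourceGroup, string accountName, string accessToken)
    {
        // Example values only; adjust the time span, interval, metric name, and aggregation for your workload.
        string url =
            $"https://management.azure.com/subscriptions/{subscriptionId}" +
            $"/resourceGroups/{resourceGroup}/providers/Microsoft.DocumentDb" +
            $"/databaseAccounts/{accountName}/providers/microsoft.insights/metrics" +
            "?timespan=2023-03-01T00:00:00Z/2023-03-02T00:00:00Z" +
            "&interval=PT1H&metricnames=TotalRequests&aggregation=Count" +
            "&api-version=2018-01-01";

        using HttpClient client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        HttpResponseMessage response = await client.GetAsync(url);
        response.EnsureSuccessStatusCode();

        // The response body is a JSON document that contains the metric time series.
        return await response.Content.ReadAsStringAsync();
    }
}
```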
-To learn more, see the [Azure monitoring REST API](../azure-monitor/essentials/rest-api-walkthrough.md) article.
+To learn more, see [Azure monitoring REST API](../azure-monitor/essentials/rest-api-walkthrough.md).
## Next steps
-* See [Azure Cosmos DB monitoring data reference](monitor-reference.md) for a reference of the logs and metrics created by Azure Cosmos DB.
-* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- [Monitoring Azure Cosmos DB data reference](monitor-reference.md)
+- [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md)
cosmos-db Bulk Executor Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/bulk-executor-dotnet.md
Title: Use bulk executor .NET library in Azure Cosmos DB for bulk import and update operations
-description: Bulk import and update the Azure Cosmos DB documents using the bulk executor .NET library.
+description: Learn how to bulk import and update the Azure Cosmos DB documents using the bulk executor .NET library.
ms.devlang: csharp Previously updated : 05/02/2020 Last updated : 03/15/2023 # Use the bulk executor .NET library to perform bulk operations in Azure Cosmos DB+ [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] > [!NOTE]
-> This bulk executor library described in this article is maintained for applications using the .NET SDK 2.x version. For new applications, you can use the **bulk support** that is directly available with the [.NET SDK version 3.x](tutorial-dotnet-bulk-import.md) and it does not require any external library.
-
-> If you are currently using the bulk executor library and planning to migrate to bulk support on the newer SDK, use the steps in the [Migration guide](how-to-migrate-from-bulk-executor-library.md) to migrate your application.
+> The bulk executor library described in this article is maintained for applications that use the .NET SDK 2.x version. For new applications, you can use the **bulk support** that's directly available with the [.NET SDK version 3.x](tutorial-dotnet-bulk-import.md), and it doesn't require any external library.
+>
+> If you currently use the bulk executor library and plan to migrate to bulk support on the newer SDK, use the steps in the [Migration guide](how-to-migrate-from-bulk-executor-library.md) to migrate your application.
-This tutorial provides instructions on using the bulk executor .NET library to import and update documents to an Azure Cosmos DB container. To learn about the bulk executor library and how it helps you use massive throughput and storage, see the [bulk executor library overview](../bulk-executor-overview.md) article. In this tutorial, you'll see a sample .NET application that bulk imports randomly generated documents into an Azure Cosmos DB container. After importing the data, the library shows you how you can bulk update the imported data by specifying patches as operations to perform on specific document fields.
+This tutorial provides instructions on how to use the bulk executor .NET library to import and update documents to an Azure Cosmos DB container. To learn about the bulk executor library and how it helps you use massive throughput and storage, see the [Azure Cosmos DB bulk executor library overview](../bulk-executor-overview.md). In this tutorial, you see a sample .NET application that bulk imports randomly generated documents into an Azure Cosmos DB container. After you import the data, the sample shows you how to bulk update the imported data by specifying patches as operations to perform on specific document fields.
-Currently, bulk executor library is supported by the Azure Cosmos DB for NoSQL and API for Gremlin accounts only. This article describes how to use the bulk executor .NET library with API for NoSQL accounts. To learn about using the bulk executor .NET library with API for Gremlin accounts, see [perform bulk operations in the Azure Cosmos DB for Gremlin](../gremlin/bulk-executor-dotnet.md).
+Currently, the bulk executor library is supported by Azure Cosmos DB for NoSQL and API for Gremlin accounts only. This article describes how to use the bulk executor .NET library with API for NoSQL accounts. To learn how to use the bulk executor .NET library with API for Gremlin accounts, see [Ingest data in bulk in the Azure Cosmos DB for Gremlin by using a bulk executor library](../gremlin/bulk-executor-dotnet.md).
## Prerequisites
Currently, bulk executor library is supported by the Azure Cosmos DB for NoSQL a
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
-* You can [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments. Or, you can use the [Azure Cosmos DB Emulator](../local-emulator.md) with the `https://localhost:8081` endpoint. The Primary Key is provided in [Authenticating requests](../local-emulator.md#authenticate-requests).
+* You can [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also [Install and use the Azure Cosmos DB Emulator for local development and testing](../local-emulator.md) with the `https://localhost:8081` endpoint. The Primary Key is provided in [Authenticating requests](../local-emulator.md#authenticate-requests).
-* Create an Azure Cosmos DB for NoSQL account by using the steps described in the [create a database account](quickstart-dotnet.md#create-account) section of the .NET quickstart article.
+* Create an Azure Cosmos DB for NoSQL account by using the steps described in the [Create an Azure Cosmos DB account](quickstart-dotnet.md#create-account) section of [Quickstart: Azure Cosmos DB for NoSQL client library for .NET](quickstart-dotnet.md).
## Clone the sample application
-Now let's switch to working with code by downloading a sample .NET application from GitHub. This application performs bulk operations on the data stored in your Azure Cosmos DB account. To clone the application, open a command prompt, navigate to the directory where you want to copy it and run the following command:
+Now let's switch to working with code by downloading a sample .NET application from GitHub. This application performs bulk operations on the data stored in your Azure Cosmos DB account. To clone the application, open a command prompt, navigate to the directory where you want to copy it, and run the following command:
```bash git clone https://github.com/Azure/azure-cosmosdb-bulkexecutor-dotnet-getting-started.git ```
-The cloned repository contains two samples "BulkImportSample" and "BulkUpdateSample". You can open either of the sample applications, update the connection strings in App.config file with your Azure Cosmos DB account's connection strings, build the solution, and run it.
+The cloned repository contains two samples, *BulkImportSample* and *BulkUpdateSample*. You can open either of the sample applications, update the connection strings in *App.config* file with your Azure Cosmos DB account's connection strings, build the solution, and run it.
-The "BulkImportSample" application generates random documents and bulk imports them to your Azure Cosmos DB account. The "BulkUpdateSample" application bulk updates the imported documents by specifying patches as operations to perform on specific document fields. In the next sections, you'll review the code in each of these sample apps.
+The *BulkImportSample* application generates random documents and bulk imports them to your Azure Cosmos DB account. The *BulkUpdateSample* application bulk updates the imported documents by specifying patches as operations to perform on specific document fields. In the next sections, you'll review the code in each of these sample apps.
## <a id="bulk-import-data-to-an-azure-cosmos-account"></a>Bulk import data to an Azure Cosmos DB account
-1. Navigate to the "BulkImportSample" folder and open the "BulkImportSample.sln" file.
+1. Navigate to the *BulkImportSample* folder and open the *BulkImportSample.sln* file.
-2. The Azure Cosmos DB's connection strings are retrieved from the App.config file as shown in the following code:
+1. The Azure Cosmos DB's connection strings are retrieved from the App.config file as shown in the following code:
```csharp private static readonly string EndpointUrl = ConfigurationManager.AppSettings["EndPointUrl"];
The "BulkImportSample" application generates random documents and bulk imports t
The bulk importer creates a new database and a container with the database name, container name, and the throughput values specified in the App.config file.
-3. Next the DocumentClient object is initialized with Direct TCP connection mode:
+1. Next, the *DocumentClient* object is initialized with Direct TCP connection mode:
```csharp ConnectionPolicy connectionPolicy = new ConnectionPolicy
The "BulkImportSample" application generates random documents and bulk imports t
connectionPolicy) ```
-4. The BulkExecutor object is initialized with a high retry value for wait time and throttled requests. And then they're set to 0 to pass congestion control to BulkExecutor for its lifetime.
+1. The *BulkExecutor* object is initialized with a high retry value for wait time and throttled requests. The retry options are then set to 0 to pass congestion control to *BulkExecutor* for its lifetime.
```csharp // Set retry options high during initialization (default values).
The "BulkImportSample" application generates random documents and bulk imports t
client.ConnectionPolicy.RetryOptions.MaxRetryAttemptsOnThrottledRequests = 0; ```
-5. The application invokes the BulkImportAsync API. The .NET library provides two overloads of the bulk import API - one that accepts a list of serialized JSON documents and the other that accepts a list of deserialized POCO documents. To learn more about the definitions of each of these overloaded methods, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkexecutor.bulkimportasync).
+1. The application invokes the *BulkImportAsync* API. The .NET library provides two overloads of the bulk import API&mdash;one that accepts a list of serialized JSON documents and the other that accepts a list of deserialized POCO documents. To learn more about the definitions of each of these overloaded methods, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkexecutor.bulkimportasync).
```csharp BulkImportResponse bulkImportResponse = await bulkExecutor.BulkImportAsync(
The "BulkImportSample" application generates random documents and bulk imports t
cancellationToken: token); ``` **BulkImportAsync method accepts the following parameters:**
-
- |**Parameter** |**Description** |
+
+ |**Parameter** |**Description** |
|||
- |enableUpsert | A flag to enable upsert operations on the documents. If a document with the given ID already exists, it's updated. By default, it's set to false. |
- |disableAutomaticIdGeneration | A flag to disable automatic generation of ID. By default, it's set to true. |
- |maxConcurrencyPerPartitionKeyRange | The maximum degree of concurrency per partition key range, setting to null will cause library to use a default value of 20. |
- |maxInMemorySortingBatchSize | The maximum number of documents that are pulled from the document enumerator, which is passed to the API call in each stage. For in-memory sorting phase that happens before bulk importing, setting this parameter to null will cause library to use default minimum value (documents.count, 1000000). |
- |cancellationToken | The cancellation token to gracefully exit the bulk import operation. |
+ |*enableUpsert* | A flag to enable upsert operations on the documents. If a document with the given ID already exists, it's updated. By default, it's set to false. |
+ |*disableAutomaticIdGeneration* | A flag to disable automatic generation of ID. By default, it's set to true. |
+ |*maxConcurrencyPerPartitionKeyRange* | The maximum degree of concurrency per partition key range. Setting it to null causes the library to use a default value of 20. |
+ |*maxInMemorySortingBatchSize* | The maximum number of documents that are pulled from the document enumerator, which is passed to the API call, in each stage of the in-memory sorting phase that happens before bulk importing. Setting this parameter to null causes the library to use the default value of min(documents.count, 1000000). |
+ |*cancellationToken* | The cancellation token to gracefully exit the bulk import operation. |
- **Bulk import response object definition**
- The result of the bulk import API call contains the following attributes:
+**Bulk import response object definition**<br>
+The result of the bulk import API call contains the following attributes:
- |**Parameter** |**Description** |
+ |**Parameter** |**Description** |
|||
- |NumberOfDocumentsImported (long) | The total number of documents that were successfully imported out of the total documents supplied to the bulk import API call. |
- |TotalRequestUnitsConsumed (double) | The total request units (RU) consumed by the bulk import API call. |
- |TotalTimeTaken (TimeSpan) | The total time taken by the bulk import API call to complete the execution. |
- |BadInputDocuments (List\<object>) | The list of bad-format documents that weren't successfully imported in the bulk import API call. Fix the documents returned and retry import. Bad-formatted documents include documents whose ID value isn't a string (null or any other datatype is considered invalid). |
+ |*NumberOfDocumentsImported* (long) | The total number of documents that were successfully imported out of the total documents supplied to the bulk import API call. |
+ |*TotalRequestUnitsConsumed* (double) | The total request units (RU) consumed by the bulk import API call. |
+ |*TotalTimeTaken* (TimeSpan) | The total time taken by the bulk import API call to complete the execution. |
+ |*BadInputDocuments* (List\<object>) | The list of badly formatted documents that weren't successfully imported in the bulk import API call. Fix the documents returned and retry the import. Badly formatted documents include documents whose ID value isn't a string (null or any other data type is considered invalid). |
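To tie the preceding steps together, here's a minimal end-to-end sketch rather than the sample's exact code. It assumes the bulk executor 2.x library (`Microsoft.Azure.CosmosDB.BulkExecutor`), the `DocumentClient` and `DocumentCollection` created earlier, and a hypothetical `documents` sequence of serialized JSON strings.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.CosmosDB.BulkExecutor;
using Microsoft.Azure.CosmosDB.BulkExecutor.BulkImport;

public static class BulkImportSketch
{
    public static async Task RunAsync(DocumentClient client, DocumentCollection dataCollection, IEnumerable<string> documents)
    {
        // Keep retries high while the executor reads the partition map during initialization.
        client.ConnectionPolicy.RetryOptions.MaxRetryWaitTimeInSeconds = 30;
        client.ConnectionPolicy.RetryOptions.MaxRetryAttemptsOnThrottledRequests = 9;

        IBulkExecutor bulkExecutor = new BulkExecutor(client, dataCollection);
        await bulkExecutor.InitializeAsync();

        // Hand congestion control to the bulk executor for its lifetime.
        client.ConnectionPolicy.RetryOptions.MaxRetryWaitTimeInSeconds = 0;
        client.ConnectionPolicy.RetryOptions.MaxRetryAttemptsOnThrottledRequests = 0;

        using CancellationTokenSource cts = new CancellationTokenSource();

        BulkImportResponse bulkImportResponse = await bulkExecutor.BulkImportAsync(
            documents: documents,
            enableUpsert: true,
            disableAutomaticIdGeneration: true,
            maxConcurrencyPerPartitionKeyRange: null,
            maxInMemorySortingBatchSize: null,
            cancellationToken: cts.Token);

        Console.WriteLine($"Imported {bulkImportResponse.NumberOfDocumentsImported} documents " +
            $"using {bulkImportResponse.TotalRequestUnitsConsumed} RUs in {bulkImportResponse.TotalTimeTaken}.");
        Console.WriteLine($"Bad-format documents: {bulkImportResponse.BadInputDocuments?.Count ?? 0}");
    }
}
```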
## Bulk update data in your Azure Cosmos DB account
-You can update existing documents by using the BulkUpdateAsync API. In this example, you'll set the `Name` field to a new value and remove the `Description` field from the existing documents. For the full set of supported update operations, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkupdate).
+You can update existing documents by using the *BulkUpdateAsync* API. In this example, you set the `Name` field to a new value and remove the `Description` field from the existing documents. For the full set of supported update operations, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkupdate).
-1. Navigate to the "BulkUpdateSample" folder and open the "BulkUpdateSample.sln" file.
+1. Navigate to the *BulkUpdateSample* folder and open the *BulkUpdateSample.sln* file.
-2. Define the update items along with the corresponding field update operations. In this example, you'll use `SetUpdateOperation` to update the `Name` field and `UnsetUpdateOperation` to remove the `Description` field from all the documents. You can also perform other operations like increment a document field by a specific value, push specific values into an array field, or remove a specific value from an array field. To learn about different methods provided by the bulk update API, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkupdate).
+1. Define the update items along with the corresponding field update operations. In this example, you use *SetUpdateOperation* to update the `Name` field and *UnsetUpdateOperation* to remove the `Description` field from all the documents. You can also perform other operations like incrementing a document field by a specific value, pushing specific values into an array field, or removing a specific value from an array field. To learn about different methods provided by the bulk update API, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.bulkupdate).
```csharp SetUpdateOperation<string> nameUpdate = new SetUpdateOperation<string>("Name", "UpdatedDoc");
You can update existing documents by using the BulkUpdateAsync API. In this exam
} ```
-3. The application invokes the BulkUpdateAsync API. To learn about the definition of the BulkUpdateAsync method, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.ibulkexecutor.bulkupdateasync).
+1. The application invokes the *BulkUpdateAsync* API. To learn about the definition of the *BulkUpdateAsync* method, refer to the [API documentation](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.ibulkexecutor.bulkupdateasync).
```csharp BulkUpdateResponse bulkUpdateResponse = await bulkExecutor.BulkUpdateAsync(
You can update existing documents by using the BulkUpdateAsync API. In this exam
``` **BulkUpdateAsync method accepts the following parameters:**
- |**Parameter** |**Description** |
+ |**Parameter** |**Description** |
|||
- |maxConcurrencyPerPartitionKeyRange | The maximum degree of concurrency per partition key range, setting this parameter to null will make the library to use the default value(20). |
- |maxInMemorySortingBatchSize | The maximum number of update items pulled from the update items enumerator passed to the API call in each stage. For the in-memory sorting phase that happens before bulk updating, setting this parameter to null will cause the library to use the default minimum value(updateItems.count, 1000000). |
- | cancellationToken|The cancellation token to gracefully exit the bulk update operation. |
+ |*maxConcurrencyPerPartitionKeyRange* | The maximum degree of concurrency per partition key range. Setting this parameter to null makes the library use the default value (20). |
+ |*maxInMemorySortingBatchSize* | The maximum number of update items pulled from the update items enumerator passed to the API call, in each stage of the in-memory sorting phase that happens before bulk updating. Setting this parameter to null causes the library to use the default value of min(updateItems.count, 1000000). |
+ |*cancellationToken*|The cancellation token to gracefully exit the bulk update operation. |
- **Bulk update response object definition**
- The result of the bulk update API call contains the following attributes:
+**Bulk update response object definition**<br>
+The result of the bulk update API call contains the following attributes:
- |**Parameter** |**Description** |
+ |**Parameter** |**Description** |
|||
- |NumberOfDocumentsUpdated (long) | The number of documents that were successfully updated out of the total documents supplied to the bulk update API call. |
- |TotalRequestUnitsConsumed (double) | The total request units (RUs) consumed by the bulk update API call. |
- |TotalTimeTaken (TimeSpan) | The total time taken by the bulk update API call to complete the execution. |
-
-## Performance tips
+ |*NumberOfDocumentsUpdated* (long) | The number of documents that were successfully updated out of the total documents supplied to the bulk update API call. |
+ |*TotalRequestUnitsConsumed* (double) | The total request units (RUs) consumed by the bulk update API call. |
+ |*TotalTimeTaken* (TimeSpan) | The total time taken by the bulk update API call to complete the execution. |
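In the same spirit, the following sketch outlines the update flow under the same assumptions: the initialized bulk executor from the import sample, and the *UpdateItem*, *SetUpdateOperation*, and *UnsetUpdateOperation* types from the `Microsoft.Azure.CosmosDB.BulkExecutor.BulkUpdate` namespace. The partition key value `"Task"` and the document IDs are placeholders.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.CosmosDB.BulkExecutor;
using Microsoft.Azure.CosmosDB.BulkExecutor.BulkUpdate;

public static class BulkUpdateSketch
{
    public static async Task RunAsync(IBulkExecutor bulkExecutor, IEnumerable<string> documentIds)
    {
        // Set the Name field and remove the Description field on every targeted document.
        List<UpdateOperation> operations = new List<UpdateOperation>
        {
            new SetUpdateOperation<string>("Name", "UpdatedDoc"),
            new UnsetUpdateOperation("Description")
        };

        // One UpdateItem per document: the ID, its partition key value (a placeholder here), and the operations to apply.
        List<UpdateItem> updateItems = new List<UpdateItem>();
        foreach (string id in documentIds)
        {
            updateItems.Add(new UpdateItem(id, "Task", operations));
        }

        using CancellationTokenSource cts = new CancellationTokenSource();

        BulkUpdateResponse bulkUpdateResponse = await bulkExecutor.BulkUpdateAsync(
            updateItems: updateItems,
            maxConcurrencyPerPartitionKeyRange: null,
            maxInMemorySortingBatchSize: null,
            cancellationToken: cts.Token);

        Console.WriteLine($"Updated {bulkUpdateResponse.NumberOfDocumentsUpdated} documents " +
            $"using {bulkUpdateResponse.TotalRequestUnitsConsumed} RUs in {bulkUpdateResponse.TotalTimeTaken}.");
    }
}
```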
+
+## Performance tips
-Consider the following points for better performance when using the bulk executor library:
+Consider the following points for better performance when you use the bulk executor library:
-* For best performance, run your application from an Azure virtual machine that is in the same region as your Azure Cosmos DB account's write region.
+* For best performance, run your application from an Azure virtual machine that's in the same region as your Azure Cosmos DB account's write region.
-* It's recommended that you instantiate a single `BulkExecutor` object for the whole application within a single virtual machine that corresponds to a specific Azure Cosmos DB container.
+* It's recommended that you instantiate a single *BulkExecutor* object for the whole application within a single virtual machine that corresponds to a specific Azure Cosmos DB container.
-* A single bulk operation API execution consumes a large chunk of the client machine's CPU and network IO (This happens by spawning multiple tasks internally). Avoid spawning multiple concurrent tasks within your application process that execute bulk operation API calls. If a single bulk operation API call that is running on a single virtual machine is unable to consume the entire container's throughput (if your container's throughput > 1 million RU/s), it's preferred to create separate virtual machines to concurrently execute the bulk operation API calls.
+* A single bulk operation API execution consumes a large chunk of the client machine's CPU and network IO when spawning multiple tasks internally. Avoid spawning multiple concurrent tasks within your application process that execute bulk operation API calls. If a single bulk operation API call that's running on a single virtual machine is unable to consume the entire container's throughput (if your container's throughput > 1 million RU/s), it's preferred to create separate virtual machines to concurrently execute the bulk operation API calls.
-* Ensure the `InitializeAsync()` method is invoked after instantiating a BulkExecutor object to fetch the target Azure Cosmos DB container's partition map.
+* Ensure the `InitializeAsync()` method is invoked after instantiating a *BulkExecutor* object to fetch the target Azure Cosmos DB container's partition map.
* In your application's App.Config, ensure **gcServer** is enabled for better performance ```xml
Consider the following points for better performance when using the bulk executo
<gcServer enabled="true" /> </runtime> ```
-* The library emits traces that can be collected either into a log file or on the console. To enable both, add the following code to your application's App.Config file.
+* The library emits traces that can be collected either into a log file or on the console. To enable both, add the following code to your application's *App.Config* file:
```xml <system.diagnostics>
Consider the following points for better performance when using the bulk executo
## Next steps
-* To learn about the NuGet package details and the release notes, see the [bulk executor SDK details](sdk-dotnet-bulk-executor-v2.md).
+* [.NET bulk executor library: Download information (Legacy)](sdk-dotnet-bulk-executor-v2.md).
cosmos-db Estimate Ru With Capacity Planner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/estimate-ru-with-capacity-planner.md
Title: Estimate costs using the Azure Cosmos DB capacity planner - API for NoSQL
-description: The Azure Cosmos DB capacity planner allows you to estimate the throughput (RU/s) required and cost for your workload. This article describes how to use the capacity planner to estimate the throughput and cost required when using API for NoSQL.
+description: Learn how to use Azure Cosmos DB capacity planner to estimate the throughput and cost required when using Azure Cosmos DB for NoSQL.
Previously updated : 08/26/2021 Last updated : 03/15/2023
-# Estimate RU/s using the Azure Cosmos DB capacity planner - API for NoSQL
+# Estimate RU/s using the Azure Cosmos DB capacity planner - Azure Cosmos DB for NoSQL
[!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] > [!NOTE]
-> If you are planning a data migration to Azure Cosmos DB and all that you know is the number of vcores and servers in your existing sharded and replicated database cluster, please also read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
->
+> If you're planning a data migration to Azure Cosmos DB and all that you know is the number of vcores and servers in your existing sharded and replicated database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
-Configuring your Azure Cosmos DB databases and containers with the right amount of provisioned throughput, or [Request Units (RU/s)](../request-units.md), for your workload is essential to optimizing cost and performance. This article describes how to use the Azure Cosmos DB [capacity planner](https://cosmos.azure.com/capacitycalculator/) to get an estimate of the required RU/s and cost of your workload when using the API for NoSQL. If you are using API for MongoDB, see how to [use capacity calculator with MongoDB](../mongodb/estimate-ru-capacity-planner.md) article.
+Configuring your Azure Cosmos DB databases and containers with the right amount of provisioned throughput, or [Request Units (RU/s)](../request-units.md), for your workload is essential to optimizing cost and performance. This article describes how to use the Azure Cosmos DB [capacity planner](https://cosmos.azure.com/capacitycalculator/) to estimate the required RU/s and cost of your workload when using Azure Cosmos DB for NoSQL. If you're using Azure Cosmos DB for MongoDB, see [Estimate RU/s - Azure Cosmos DB for MongoDB](../mongodb/estimate-ru-capacity-planner.md).
[!INCLUDE [capacity planner modes](../includes/capacity-planner-modes.md)] ## <a id="basic-mode"></a>Estimate provisioned throughput and cost using basic mode
-To get a quick estimate for your workload using the basic mode, navigate to the [capacity planner](https://cosmos.azure.com/capacitycalculator/). Enter in the following parameters based on your workload:
-|**Input** |**Description** |
-|||
-| API |Choose API for NoSQL |
-|Number of regions|Azure Cosmos DB is available in all Azure regions. Select the number of regions required for your workload. You can associate any number of regions with your Azure Cosmos DB account. See [global distribution](../distribute-data-globally.md) in Azure Cosmos DB for more details.|
-|Multi-region writes|If you enable [multi-region writes](../distribute-data-globally.md#key-benefits-of-global-distribution), your application can read and write to any Azure region. If you disable multi-region writes, your application can write data to a single region. <br/><br/> Enable multi-region writes if you expect to have an active-active workload that requires low latency writes in different regions. For example, an IOT workload that writes data to the database at high volumes in different regions. <br/><br/> Multi-region writes guarantees 99.999% read and write availability. Multi-region writes require more throughput when compared to the single write regions. To learn more, see [how RUs are different for single and multiple-write regions](../optimize-cost-regions.md) article.|
-|Total data stored in transactional store |Total estimated data stored(GB) in the transactional store in a single region.|
-|Use analytical store| Choose **On** if you want to use analytical store. Enter the **Total data stored in analytical store**, it represents the estimated data stored (GB) in the analytical store in a single region. |
-|Item size|The estimated size of the data item (for example, document), ranging from 1 KB to 2 MB. |
-|Queries/sec |Number of queries expected per second per region. The average RU charge to run a query is estimated at 10 RUs. |
-|Point reads/sec |Number of point read operations expected per second per region. Point reads are the key/value lookup on a single item ID and a partition key. To learn more about point reads, see the [options to read data](../optimize-cost-reads-writes.md#reading-data-point-reads-and-queries) article. |
-|Creates/sec |Number of create operations expected per second per region. |
-|Updates/sec |Number of update operations expected per second per region. When you choose automatic indexing, the estimated RU/s for the update operation is calculated as one property being changed per an update. |
-|Deletes/sec |Number of delete operations expected per second per region. |
-
-After filling the required details, select **Calculate**. The **Cost Estimate** tab shows the total cost for storage and provisioned throughput. You can expand the **Show Details** link in this tab to get the breakdown of the throughput required for different CRUD and query requests. Each time you change the value of any field, select **Calculate** to recalculate the estimated cost.
+To get a quick estimate for your workload using the basic mode, open the [capacity planner](https://cosmos.azure.com/capacitycalculator/). Enter the following parameters based on your workload:
+| Input | Description |
+|||
+| API | Choose *Azure Cosmos DB for NoSQL*. |
+| Number of regions | Azure Cosmos DB is available in all Azure regions. Select the number of regions required for your workload. You can associate any number of regions with your Azure Cosmos DB account. For more information, see [Distribute your data globally with Azure Cosmos DB](../distribute-data-globally.md). |
+| Multi-region writes | If you enable [multi-region writes](../distribute-data-globally.md#key-benefits-of-global-distribution), your application can read and write to any Azure region. If you disable multi-region writes, your application can write data to a single region. Enable multi-region writes if you expect to have an active-active workload that requires low-latency writes in different regions. For example, an IoT workload that writes data to the database at high volumes in different regions. Multi-region writes guarantee 99.999% read and write availability. Multi-region writes require more throughput when compared to single-write regions. For more information, see [Optimize multi-region cost in Azure Cosmos DB](../optimize-cost-regions.md). |
+| Total data stored in transactional store | Total estimated data stored, in GB, in the transactional store in a single region. |
+| Use Analytical Store | Choose **On** if you want to use analytical store. Enter the **Total data stored in analytical store**, which represents the estimated data stored, in GB, in the analytical store in a single region. |
+| Item size | The estimated size of the data item (for example, a document). |
+| Point reads/sec in max-read region | Number of point read operations expected per second per region. Point reads are the key/value lookup on a single item ID and a partition key. For more information about point reads, see [Reading data: point reads and queries](../optimize-cost-reads-writes.md#reading-data-point-reads-and-queries). |
+| Creates/sec across all regions | Number of create operations expected per second per region. |
+| Updates/sec across all regions | Number of update operations expected per second per region. When you choose automatic indexing, the estimated RU/s for the update operation is calculated as one property being changed per an update. |
+| Deletes/sec across all regions | Number of delete operations expected per second per region. |
+| Queries/sec across all regions | Number of queries expected per second per region. The average RU charge to run a query is estimated at 10 RUs. |
+
+After you fill in the required details, select **Calculate**. The **Cost Estimate** table shows the total cost for storage and provisioned throughput. You can expand the **Show Details** link to get the breakdown of the throughput required for different CRUD and query requests. Each time you change the value of any field, select **Calculate** to recalculate the estimated cost.
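As a rough, illustrative cross-check of the planner's output (not its exact formula): using the planner's default estimate of about 10 RU per query and the common rule of thumb of roughly 1 RU for a 1-KB point read, a single-region workload with 500 point reads/sec and 100 queries/sec needs on the order of (500 × 1) + (100 × 10) = 1,500 RU/s, before the cost of writes and indexing is added.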
+ ## <a id="advanced-mode"></a>Estimate provisioned throughput and cost using advanced mode
-Advanced mode allows you to provide more settings that impact the RU/s estimate. To use this option, navigate to the [capacity planner](https://cosmos.azure.com/capacitycalculator/) and **sign in** to the tool with an account you use for Azure. The sign-in option is available at the right-hand corner.
+Advanced mode allows you to provide more settings that affect the RU/s estimate. To use this option, go to the [capacity planner](https://cosmos.azure.com/capacitycalculator/) and sign in with an account you use for Azure. The **Sign In** option is available at the right-hand corner.
After you sign in, you can see more fields compared to the fields in basic mode. Enter the other parameters based on your workload.
-|**Input** |**Description** |
+| Input | Description |
|||
-|API|Azure Cosmos DB is a multi-model and multi-API service. Choose API for NoSQL. |
-|Number of regions|Azure Cosmos DB is available in all Azure regions. Select the number of regions required for your workload. You can associate any number of regions with your Azure Cosmos DB account. See [global distribution](../distribute-data-globally.md) in Azure Cosmos DB for more details.|
-|Multi-region writes|If you enable [multi-region writes](../distribute-data-globally.md#key-benefits-of-global-distribution), your application can read and write to any Azure region. If you disable multi-region writes, your application can write data to a single region. <br/><br/> Enable multi-region writes if you expect to have an active-active workload that requires low latency writes in different regions. For example, an IOT workload that writes data to the database at high volumes in different regions. <br/><br/> Multi-region writes guarantees 99.999% read and write availability. Multi-region writes require more throughput when compared to the single write regions. To learn more, see [how RUs are different for single and multiple-write regions](../optimize-cost-regions.md) article.|
-|Default consistency|Azure Cosmos DB supports 5 consistency levels, to allow developers to balance the tradeoff between consistency, availability, and latency tradeoffs. To learn more, see the [consistency levels](../consistency-levels.md) article. <br/><br/> By default, Azure Cosmos DB uses session consistency, which guarantees the ability to read your own writes in a session. <br/><br/> Choosing strong or bounded staleness will require double the required RU/s for reads, when compared to session, consistent prefix, and eventual consistency. Strong consistency with multi-region writes is not supported and will automatically default to single-region writes with strong consistency. |
-|Indexing policy|By default, Azure Cosmos DB [indexes all properties](../index-policy.md) in all items for flexible and efficient queries (maps to the **Automatic** indexing policy). <br/><br/> If you choose **off**, none of the properties are indexed. This results in the lowest RU charge for writes. Select **off** policy if you expect to only do [point reads](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) (key value lookups) and/or writes, and no queries. <br/><br/> If you choose **Automatic**, Azure Cosmos DB automatically indexes all the items as they are written. <br/><br/> **Custom** indexing policy allows you to include or exclude specific properties from the index for lower write throughput and storage. To learn more, see [indexing policy](../index-overview.md) and [sample indexing policies](how-to-manage-indexing-policy.md#indexing-policy-examples) articles.|
-|Total data stored in transactional store |Total estimated data stored(GB) in the transactional store in a single region.|
-|Use analytical store| Choose **On** if you want to use analytical store. Enter the **Total data stored in analytical store**, it represents the estimated data stored(GB) in the analytical store in a single region. |
-|Workload mode|Select **Steady** option if your workload volume is constant. <br/><br/> Select **Variable** option if your workload volume changes over time. For example, during a specific day or a month. The following setting is available if you choose the variable workload option:<ul><li>Percentage of time at peak: Percentage of time in a month where your workload requires peak (highest) throughput. </li></ul> <br/><br/> For example, if you have a workload that has high activity during 9am ΓÇô 6pm weekday business hours, then the percentage of time at peak is: `(9 hours per weekday at peak * 5 days per week at peak) / (24 hours per day at peak * 7 days in a week) = 45 / 168 = ~27%`.<br/><br/>With peak and off-peak intervals, you can optimize your cost by [programmatically scaling your provisioned throughput](../set-throughput.md#update-throughput-on-a-database-or-a-container) up and down accordingly.|
-|Item size|The size of the data item (for example, document), ranging from 1 KB to 2 MB. You can add estimates for multiple sample items. <br/><br/>You can also **Upload sample (JSON)** document for a more accurate estimate.<br/><br/>If your workload has multiple types of items (with different JSON content) in the same container, you can upload multiple JSON documents and get the estimate. Use the **Add new item** button to add multiple sample JSON documents.|
+| API | Azure Cosmos DB is a multi-model and multi-API service. Choose *Azure Cosmos DB for NoSQL*. |
+| Number of regions | Azure Cosmos DB is available in all Azure regions. Select the number of regions required for your workload. You can associate any number of regions with your Azure Cosmos DB account. For more information, see [Distribute your data globally with Azure Cosmos DB](../distribute-data-globally.md). |
+| Multi-region writes | If you enable [multi-region writes](../distribute-data-globally.md#key-benefits-of-global-distribution), your application can read and write to any Azure region. If you disable multi-region writes, your application can write data to a single region. Enable multi-region writes if you expect to have an active-active workload that requires low-latency writes in different regions. For example, an IoT workload that writes data to the database at high volumes in different regions. Multi-region writes guarantee 99.999% read and write availability. Multi-region writes require more throughput when compared to single-write regions. For more information, see [Optimize multi-region cost in Azure Cosmos DB](../optimize-cost-regions.md). |
+| Default consistency | Azure Cosmos DB supports five consistency levels to allow you to balance the consistency, availability, and latency tradeoffs. For more information, see [consistency levels](../consistency-levels.md). By default, Azure Cosmos DB uses **Session** consistency, which guarantees the ability to read your own writes in a session. Choosing **Strong** or **Bounded staleness** requires double the required RU/s for reads, when compared to **Session**, **Consistent prefix**, and **Eventual** consistency. **Strong** consistency with multi-region writes isn't supported and automatically defaults to single-region writes with **strong** consistency. |
+| Indexing policy | By default, Azure Cosmos DB [indexes all properties](../index-policy.md) in all items for flexible and efficient queries. This approach maps to the **Automatic** indexing policy. If you choose **Off**, none of the properties are indexed. This approach results in the lowest RU charge for writes. Select **Off** if you expect to only do [point reads](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) (key value lookups) and writes, and no queries. If you choose **Automatic**, Azure Cosmos DB automatically indexes all the items as they're written. The **Custom** indexing policy allows you to include or exclude specific properties from the index for lower write throughput and storage. For more information, see [Indexing in Azure Cosmos DB](../index-overview.md) and [Indexing policy examples](how-to-manage-indexing-policy.md#indexing-policy-examples).|
+|Total data stored in transactional store | Total estimated data stored, in GB, in the transactional store in a single region.|
+| Use Analytical Store | Choose **On** if you want to use analytical store. Enter the **Total data stored in analytical store**, which represents the estimated data stored, in GB, in the analytical store in a single region. |
+| Workload mode | Select **Steady** if your workload volume is constant. Select **Variable** if your workload volume changes over time, for example, during a specific day or a month. The **Percentage of time at peak** setting is available if you choose the **Variable** workload option. |
+| Percentage of time at peak | Available only with the **Variable** workload option. The percentage of time in a month where your workload requires peak (highest) throughput. For example, if you have a workload that has high activity during 9 AM to 6 PM weekday business hours, then the percentage of time at peak is: `(9 hours per weekday at peak * 5 days per week at peak) / (24 hours per day at peak * 7 days in a week) = 45 / 168 = ~27%`. With peak and off-peak intervals, you can optimize your cost by [programmatically scaling your provisioned throughput](../set-throughput.md#update-throughput-on-a-database-or-a-container) up and down accordingly. |
+| Item size | The size of the data item (for example, a document). You can add estimates for multiple sample items. You can also use **Upload sample (JSON)** to upload a document for a more accurate estimate. If your workload has multiple types of items with different JSON content in the same container, you can upload multiple JSON documents and get the estimate. Select **Add new item** to add multiple sample JSON documents. |
| Number of properties | The average number of properties per item. |
-|Point reads/sec |Number of point read operations expected per second per region. Point reads are the key/value lookup on a single item ID and a partition key. Point read operations are different from query read operations. To learn more about point reads, see the [options to read data](../optimize-cost-reads-writes.md#reading-data-point-reads-and-queries) article. If your workload mode is **Variable**, you can provide the expected number of point read operations at peak and off peak. |
+|Point reads/sec | Number of point read operations expected per second per region. Point reads are the key/value lookup on a single item ID and a partition key. Point read operations are different from query read operations. For more information about point reads, see [Reading data: point reads and queries](../optimize-cost-reads-writes.md#reading-data-point-reads-and-queries). If your workload mode is **Variable**, you can provide the expected number of point read operations at peak and off peak. |
|Creates/sec |Number of create operations expected per second per region. | |Updates/sec |Number of update operations expected per second per region. | |Deletes/sec |Number of delete operations expected per second per region. | |Queries/sec |Number of queries expected per second per region. For an accurate estimate, either use the average cost of queries or enter the RU/s your queries use from query stats in Azure portal. |
-| Average RU/s charge per query | By default, the average cost of queries/sec per region is estimated at 10 RU/s. You can increase or decrease it based on the RU/s charges based on your estimated query charge.|
-
-You can also use the **Save Estimate** button to download a CSV file containing the current estimate.
+| Average RU/s charge per query | By default, the average cost of queries/sec per region is estimated at 10 RU/s. You can increase or decrease it based on your estimated query charge. |
-The prices shown in the Azure Cosmos DB capacity planner are estimates based on the public pricing rates for throughput and storage. All prices are shown in US dollars. Refer to the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) to see all rates by region.
+The prices shown in the Azure Cosmos DB capacity planner are estimates based on the public pricing rates for throughput and storage. All prices are shown in US dollars. To see all rates by region, see [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/).
## Next steps
-* If all you know is the number of vcores and servers in your existing sharded and replicated database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* Learn more about [Azure Cosmos DB's pricing model](../how-pricing-works.md).
-* Create a new [Azure Cosmos DB account, database, and container](quickstart-portal.md).
-* Learn how to [optimize provisioned throughput cost](../optimize-cost-throughput.md).
-* Learn how to [optimize cost with reserved capacity](../reserved-capacity.md).
-
+* [Convert the number of vCores or vCPUs in your nonrelational database to Azure Cosmos DB RU/s](../convert-vcore-to-request-unit.md)
+* [Pricing model in Azure Cosmos DB](../how-pricing-works.md)
cosmos-db Performance Tips Java Sdk V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-java-sdk-v4.md
After result is received if you want to do CPU intensive work on the result you
Based on the type of your work, use the appropriate existing Reactor scheduler. For more information, see [``Schedulers``](https://projectreactor.io/docs/core/release/api/reactor/core/scheduler/Schedulers.html).
+To further understand the threading and scheduling model of Project Reactor, refer to this [blog post by Project Reactor](https://spring.io/blog/2019/12/13/flight-of-the-flux-3-hopping-threads-and-schedulers).
+ For more information on Azure Cosmos DB Java SDK v4, please look at the [Azure Cosmos DB directory of the Azure SDK for Java monorepo on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos/azure-cosmos). * **Optimize logging settings in your application**
cosmos-db Tutorial Nodejs Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-nodejs-web-app.md
Title: 'Tutorial: Build a Node.js web app with Azure Cosmos DB JavaScript SDK to manage API for NoSQL data'
-description: This Node.js tutorial explores how to use Microsoft Azure Cosmos DB to store and access data from a Node.js Express web application hosted on Web Apps feature of Microsoft Azure App Service.
+ Title: 'Tutorial: Build a Node.js web app by using the JavaScript SDK to manage an API for NoSQL account in Azure Cosmos DB'
+description: Learn how to use Azure Cosmos DB to store and access data from a Node.js Express web application hosted on the Web Apps feature of the Azure App Service.
ms.devlang: javascript Previously updated : 10/18/2021 Last updated : 03/28/2023 #Customer intent: As a developer, I want to build a Node.js web application to access and manage API for NoSQL account resources in Azure Cosmos DB, so that customers can better use the service.
-# Tutorial: Build a Node.js web app using the JavaScript SDK to manage a API for NoSQL account in Azure Cosmos DB
+# Tutorial: Build a Node.js web app by using the JavaScript SDK to manage an API for NoSQL account in Azure Cosmos DB
+ [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] > [!div class="op_single_selector"]
> * [Node.js](tutorial-nodejs-web-app.md) >
-As a developer, you might have applications that use NoSQL document data. You can use a API for NoSQL account in Azure Cosmos DB to store and access this document data. This Node.js tutorial shows you how to store and access data from a API for NoSQL account in Azure Cosmos DB by using a Node.js Express application that is hosted on the Web Apps feature of Microsoft Azure App Service. In this tutorial, you will build a web-based application (Todo app) that allows you to create, retrieve, and complete tasks. The tasks are stored as JSON documents in Azure Cosmos DB.
+As a developer, you might have applications that use NoSQL document data. You can use an API for NoSQL account in Azure Cosmos DB to store and access this document data. This Node.js tutorial shows you how to store and access data from an API for NoSQL account in Azure Cosmos DB. The tutorial uses a Node.js Express application that's hosted on the Web Apps feature of Microsoft Azure App Service. In this tutorial, you build a web-based application (Todo app) that allows you to create, retrieve, and complete tasks. The tasks are stored as JSON documents in Azure Cosmos DB.
+
+This tutorial demonstrates how to create an API for NoSQL account in Azure Cosmos DB by using the Azure portal. Without a credit card or an Azure subscription, you can:
-This tutorial demonstrates how to create a API for NoSQL account in Azure Cosmos DB by using the Azure portal. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). You then build and run a web app that is built on the Node.js SDK to create a database and container, and add items to the container. This tutorial uses JavaScript SDK version 3.0.
+* Set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
+* Build and run a web app that's built on the Node.js SDK to create a database and container.
+* Add items to the container.
-This tutorial covers the following tasks:
+This tutorial uses JavaScript SDK version 3.0 and covers the following tasks:
> [!div class="checklist"] > * Create an Azure Cosmos DB account
This tutorial covers the following tasks:
## <a name="prerequisites"></a>Prerequisites
-Before following the instructions in this article, ensure
-that you have the following resources:
+Before following the instructions in this article, ensure that you have the following resources:
-* If you don't have an Azure subscription, without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
+* Without an Azure subscription, a credit card, or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
[!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
that you have the following resources:
* Install [Git][Git] on your local workstation. ## <a name="create-account"></a>Create an Azure Cosmos DB account
-Let's start by creating an Azure Cosmos DB account. If you already have an account or if you are using the Azure Cosmos DB Emulator for this tutorial, you can skip to [Step 2: Create a new Node.js application](#create-new-app).
+
+Start by creating an Azure Cosmos DB account. If you already have an account or if you use the Azure Cosmos DB Emulator for this tutorial, you can skip to [Create a new Node.js application](#create-new-app).
[!INCLUDE [cosmos-db-create-dbaccount](../includes/cosmos-db-create-dbaccount.md)] [!INCLUDE [cosmos-db-keys](../includes/cosmos-db-keys.md)] ## <a name="create-new-app"></a>Create a new Node.js application
-Now let's learn to create a basic Hello World Node.js project using the Express framework.
+
+Now, learn how to create a basic Hello World Node.js project by using the Express framework.
1. Open your favorite terminal, such as the Node.js command prompt.
Now let's learn to create a basic Hello World Node.js project using the Express
npm start ```
-1. You can view your new application by navigating your browser to `http://localhost:3000`.
-
- :::image type="content" source="./media/tutorial-nodejs-web-app/cosmos-db-node-js-express.png" alt-text="Learn Node.js - Screenshot of the Hello World application in a browser window":::
+1. To view your new application in a browser, go to `http://localhost:3000`.
+
+ :::image type="content" source="./media/tutorial-nodejs-web-app/cosmos-db-node-js-express.png" alt-text="Screenshot of the Hello World application in a browser window.":::
Stop the application by using CTRL+C in the terminal window, and select **y** to terminate the batch job. ## <a name="install-modules"></a>Install the required modules
-The **package.json** file is one of the files created in the root of the project. This file contains a list of other modules that are required for your Node.js application. When you deploy this application to Azure, this file is used to determine which modules should be installed on Azure to support your application. Install two more packages for this tutorial.
+The *package.json* file is one of the files created in the root of the project. This file contains a list of other modules that are required for your Node.js application. When you deploy this application to Azure, this file is used to determine which modules should be installed on Azure to support your application. Install two more packages for this tutorial.
-1. Install the **\@azure/cosmos** module via npm.
+1. Install the **\@azure/cosmos** module via npm.
```bash npm install @azure/cosmos ``` ## <a name="connect-to-database"></a>Connect the Node.js application to Azure Cosmos DB
-Now that you have completed the initial setup and configuration, next you will write code that is required by the todo application to communicate with Azure Cosmos DB.
+
+After you've completed the initial setup and configuration, learn how to write the code that the todo application requires to communicate with Azure Cosmos DB.
### Create the model
-1. At the root of your project directory, create a new directory named **models**.
-2. In the **models** directory, create a new file named **taskDao.js**. This file contains code required to create the database and container. It also defines methods to read, update, create, and find tasks in Azure Cosmos DB.
+1. At the root of your project directory, create a new directory named **models**.
+
+1. In the **models** directory, create a new file named *taskDao.js*. This file contains code required to create the database and container. It also defines methods to read, update, create, and find tasks in Azure Cosmos DB (a rough sketch of the file follows these steps).
-3. Copy the following code into the **taskDao.js** file:
+1. Copy the following code into the *taskDao.js* file:
```javascript // @ts-check
Now that you have completed the initial setup and configuration, next you will w
module.exports = TaskDao ```
-4. Save and close the **taskDao.js** file.
+
+1. Save and close the *taskDao.js* file.
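For orientation, a minimal *taskDao.js* built on the `@azure/cosmos` SDK might look like the following sketch. The class shape, method names, and partition key handling are illustrative assumptions, not the tutorial's exact code.

```javascript
// @ts-check
// Illustrative sketch of a task data access object built on @azure/cosmos v3.
const CosmosClient = require("@azure/cosmos").CosmosClient;

class TaskDao {
  /**
   * @param {CosmosClient} cosmosClient - client created from the account endpoint and key
   * @param {string} databaseId - database to create or use
   * @param {string} containerId - container to create or use
   */
  constructor(cosmosClient, databaseId, containerId) {
    this.client = cosmosClient;
    this.databaseId = databaseId;
    this.collectionId = containerId;
    this.database = null;
    this.container = null;
  }

  // Create the database and container if they don't exist yet.
  async init() {
    const { database } = await this.client.databases.createIfNotExists({ id: this.databaseId });
    this.database = database;
    const { container } = await this.database.containers.createIfNotExists({ id: this.collectionId });
    this.container = container;
  }

  // Run a SQL query against the container and return the matching items.
  async find(querySpec) {
    const { resources } = await this.container.items.query(querySpec).fetchAll();
    return resources;
  }

  // Insert a new task document.
  async addItem(item) {
    item.date = Date.now();
    item.completed = false;
    const { resource } = await this.container.items.create(item);
    return resource;
  }

  // Read an existing task and mark it as completed.
  // Assumes the caller supplies the item's partition key value.
  async updateItem(itemId, partitionKeyValue) {
    const { resource: task } = await this.container.item(itemId, partitionKeyValue).read();
    task.completed = true;
    const { resource: replaced } = await this.container
      .item(itemId, partitionKeyValue)
      .replace(task);
    return replaced;
  }
}

module.exports = TaskDao;
```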
### Create the controller
-1. In the **routes** directory of your project, create a new file named **tasklist.js**.
+1. In the **routes** directory of your project, create a new file named *tasklist.js*.
-2. Add the following code to **tasklist.js**. This code loads the CosmosClient and async modules, which are used by **tasklist.js**. This code also defines the **TaskList** class, which is passed as an instance of the **TaskDao** object we defined earlier:
+1. Add the following code to *tasklist.js*. This code loads the CosmosClient and async modules that *tasklist.js* uses. It also defines the *TaskList* class, which is passed an instance of the *TaskDao* object that you defined earlier (a rough sketch of the finished controller follows these steps):
```javascript const TaskDao = require("../models/TaskDao");
Now that you have completed the initial setup and configuration, next you will w
module.exports = TaskList; ```
-3. Save and close the **tasklist.js** file.
+1. Save and close the *tasklist.js* file.
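As a rough sketch, a controller along these lines wires the HTTP routes to the data access object. The route handlers correspond to the */addtask* and */completeTask* forms described later in this tutorial, and the *TaskDao* method names are the illustrative ones from the model sketch above rather than the tutorial's exact code.

```javascript
// Illustrative sketch of the task list controller.
const TaskDao = require("../models/taskDao");

class TaskList {
  /**
   * @param {TaskDao} taskDao - the data access object created from models/taskDao.js
   */
  constructor(taskDao) {
    this.taskDao = taskDao;
  }

  // Render the index view with the tasks that aren't completed yet.
  async showTasks(req, res) {
    const querySpec = {
      query: "SELECT * FROM root r WHERE r.completed = @completed",
      parameters: [{ name: "@completed", value: false }]
    };
    const items = await this.taskDao.find(querySpec);
    res.render("index", { title: "My ToDo List", tasks: items });
  }

  // Create a new task from the posted form fields, then reload the list.
  async addTask(req, res) {
    await this.taskDao.addItem(req.body);
    res.redirect("/");
  }

  // Mark each checked task as completed, then reload the list.
  async completeTask(req, res) {
    const completedTaskIds = Object.keys(req.body);
    for (const taskId of completedTaskIds) {
      // Assumes the form posts the partition key value for each checked task.
      await this.taskDao.updateItem(taskId, req.body[taskId]);
    }
    res.redirect("/");
  }
}

module.exports = TaskList;
```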
### Add config.js
-1. At the root of your project directory, create a new file named **config.js**.
+1. At the root of your project directory, create a new file named *config.js*.
-2. Add the following code to **config.js** file. This code defines configuration settings and values needed for our application.
+1. Add the following code to the *config.js* file. This code defines the configuration settings and values that the application needs (a rough sketch of the finished file follows these steps).
```javascript const config = {};
Now that you have completed the initial setup and configuration, next you will w
module.exports = config; ```
-3. In the **config.js** file, update the values of HOST and AUTH_KEY using the values found in the Keys page of your Azure Cosmos DB account on the [Azure portal](https://portal.azure.com).
+1. In the *config.js* file, update the values of HOST and AUTH_KEY by using the values found in the **Keys** page of your Azure Cosmos DB account on the [Azure portal](https://portal.azure.com).
-4. Save and close the **config.js** file.
+1. Save and close the *config.js* file.
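For reference, a minimal *config.js* might look like the sketch below. The HOST and AUTH_KEY settings are the ones mentioned in the previous step; the database and container names are placeholder assumptions.

```javascript
// Illustrative configuration sketch; replace the placeholder values with your own.
const config = {};

// URI and primary key from the Keys page of your Azure Cosmos DB account.
config.host = process.env.HOST || "https://<your-account>.documents.azure.com:443/";
config.authKey = process.env.AUTH_KEY || "<your-primary-key>";

// Placeholder database and container names for the todo application.
config.databaseId = "ToDoList";
config.containerId = "Items";

module.exports = config;
```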
### Modify app.js
-1. In the project directory, open the **app.js** file. This file was created earlier when the Express web application was created.
+1. In the project directory, open the *app.js* file. This file was created earlier when the Express web application was created.
-2. Add the following code to the **app.js** file. This code defines the config file to be used, and loads the values into some variables that you will use in the next sections.
+1. Add the following code to the *app.js* file. This code defines the config file to be used and loads the values into some variables that you'll use in the next sections (a rough sketch of the resulting wiring follows these steps).
```javascript const CosmosClient = require('@azure/cosmos').CosmosClient
Now that you have completed the initial setup and configuration, next you will w
module.exports = app ```
-3. Finally, save and close the **app.js** file.
+1. Finally, save and close the *app.js* file.
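The following sketch shows the kind of wiring this step adds to *app.js*, using the settings and classes from the earlier sketches. The *app.js* file generated by Express already contains the app, middleware, and view engine setup, so treat this as an outline of the additions rather than the tutorial's exact file.

```javascript
// Illustrative sketch of the Azure Cosmos DB wiring added to app.js.
const express = require('express');
const CosmosClient = require('@azure/cosmos').CosmosClient;
const config = require('./config');
const TaskList = require('./routes/tasklist');
const TaskDao = require('./models/taskDao');

// In the generated app.js, the Express app, middleware, and Jade view engine
// already exist; they're repeated here only to keep the sketch self-contained.
const app = express();
app.set('view engine', 'jade');
app.use(express.urlencoded({ extended: false }));

// Create the Cosmos DB client from the settings in config.js.
const cosmosClient = new CosmosClient({
  endpoint: config.host,
  key: config.authKey
});

// Create the model and controller, and make sure the database and container exist.
const taskDao = new TaskDao(cosmosClient, config.databaseId, config.containerId);
const taskList = new TaskList(taskDao);
taskDao.init().catch(err => console.error('Failed to initialize Azure Cosmos DB:', err));

// Route the UI actions to the controller methods.
app.get('/', (req, res, next) => taskList.showTasks(req, res).catch(next));
app.post('/addtask', (req, res, next) => taskList.addTask(req, res).catch(next));
app.post('/completeTask', (req, res, next) => taskList.completeTask(req, res).catch(next));

module.exports = app;
```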
## <a name="build-ui"></a>Build a user interface
-Now let's build the user interface so that a user can interact with the application. The Express application we created in the previous sections uses **Jade** as the view engine.
+Now build the user interface so that a user can interact with the application. The Express application you created in the previous sections uses **Jade** as the view engine.
-1. The **layout.jade** file in the **views** directory is used as a global template for other **.jade** files. In this step you will modify it to use Twitter Bootstrap, which is a toolkit used to design a website.
+1. The *layout.jade* file in the **views** directory is used as a global template for other *.jade* files. In this step, you modify it to use Twitter Bootstrap, which is a toolkit for designing websites.
-2. Open the **layout.jade** file found in the **views** folder and replace the contents with the following code:
+1. Open the *layout.jade* file found in the **views** folder and replace the contents with the following code:
```html doctype html
Now let's build the user interface so that a user can interact with the applicat
script(src='//ajax.aspnetcdn.com/ajax/bootstrap/3.3.2/bootstrap.min.js') ```
- This code tells the **Jade** engine to render some HTML for our application, and creates a **block** called **content** where we can supply the layout for our content pages. Save and close the **layout.jade** file.
+ This code tells the **Jade** engine to render some HTML for the application and creates a **block** called **content** where you can supply the layout for the content pages. Save and close the *layout.jade* file.
-3. Now open the **index.jade** file, the view that will be used by our application, and replace the content of the file with the following code:
+1. Open the *index.jade* file, the view used by the application. Replace the content of the file with the following code:
```html extends layout
Now let's build the user interface so that a user can interact with the applicat
button.btn(type="submit") Add item ```
-This code extends layout, and provides content for the **content** placeholder we saw in the **layout.jade** file earlier. In this layout, we created two HTML forms.
+This code extends layout and provides content for the **content** placeholder you saw in the *layout.jade* file. This code also creates two HTML forms.
-The first form contains a table for your data and a button that allows you to update items by posting to **/completeTask** method of the controller.
-
-The second form contains two input fields and a button that allows you to create a new item by posting to **/addtask** method of the controller. That's all we need for the application to work.
+The first form contains a table for your data and a button that allows you to update items by posting to the */completeTask* method of the controller.
+
+The second form contains two input fields and a button that allows you to create a new item by posting to the */addtask* method of the controller, which is all you need for the application to work.
## <a name="run-app-locally"></a>Run your application locally
-Now that you have built the application, you can run it locally by using the following steps:
+After you've built the application, you can run it locally by using the following steps:
-1. To test the application on your local machine, run `npm start` in the terminal to start your application, and then refresh the `http://localhost:3000` browser page. The page should now look as shown in the following screenshot:
-
- :::image type="content" source="./media/tutorial-nodejs-web-app/cosmos-db-node-js-localhost.png" alt-text="Screenshot of the MyTodo List application in a browser window":::
+1. To test the application on your local machine, run `npm start` in the terminal to start your application, and then refresh the `http://localhost:3000` page. The page should now look like the following screenshot:
+
+ :::image type="content" source="./media/tutorial-nodejs-web-app/cosmos-db-node-js-localhost.png" alt-text="Screenshot of the My Todo List application in a browser.":::
> [!TIP]
- > If you receive an error about the indent in the layout.jade file or the index.jade file, ensure that the first two lines in both files are left-justified, with no spaces. If there are spaces before the first two lines, remove them, save both files, and then refresh your browser window.
+ > If you receive an error about the indent in the *layout.jade* file or the *index.jade* file, ensure that the first two lines in both files are left-justified, with no spaces. If there are spaces before the first two lines, remove them, save both files, and then refresh your browser window.
-2. Use the Item, Item Name, and Category fields to enter a new task, and then select **Add Item**. It creates a document in Azure Cosmos DB with those properties.
+1. Use the Item Name and Item Category fields to enter a new task, and then select **Add Item** to create a document in Azure Cosmos DB with those properties.
-3. The page should update to display the newly created item in the ToDo
- list.
-
- :::image type="content" source="./media/tutorial-nodejs-web-app/cosmos-db-node-js-added-task.png" alt-text="Screenshot of the application with a new item in the ToDo list":::
+1. The page updates to display the newly created item in the ToDo list.
-4. To complete a task, select the check box in the Complete column,
- and then select **Update tasks**. It updates the document you already created and removes it from the view.
+ :::image type="content" source="./media/tutorial-nodejs-web-app/cosmos-db-node-js-added-task.png" alt-text="Screenshot of the application with a new item in the ToDo list.":::
-5. To stop the application, press CTRL+C in the terminal window and then select **Y** to terminate the batch job.
+1. To complete a task, select the check box in the Complete column, and then select **Update tasks** to update the document you already created and remove it from the view.
+
+1. To stop the application, press CTRL+C in the terminal window and then select **y** to terminate the batch job.
## <a name="deploy-app"></a>Deploy your application to App Service
-After your application succeeds locally, you can deploy it to Azure App Service. In the terminal, make sure you're in the *todo* app directory. Deploy the code in your local folder (todo) using the following [az webapp up](/cli/azure/webapp#az-webapp-up) command:
+After your application runs successfully on your local machine, you can deploy it to Azure App Service. In the terminal, make sure you're in the *todo* app directory. Deploy the code in your local folder (todo) by using the following [az webapp up](/cli/azure/webapp#az-webapp-up) command:
```azurecli az webapp up --sku F1 --name <app-name> ```
-Replace <app_name> with a name that's unique across all of Azure (valid characters are a-z, 0-9, and -). A good pattern is to use a combination of your company name and an app identifier. To learn more about the app deployment, see [Node.js app deployment in Azure](../../app-service/quickstart-nodejs.md?tabs=linux&pivots=development-environment-cli#deploy-to-azure) article.
+Replace `<app-name>` with a name that's unique across all of Azure (valid characters are a-z, 0-9, and -). A good pattern is to use a combination of your company name and an app identifier. To learn more about the app deployment, see [Node.js app deployment in Azure](../../app-service/quickstart-nodejs.md?tabs=linux&pivots=development-environment-cli#deploy-to-azure).
-The command may take a few minutes to complete. While running, it provides messages about creating the resource group, the App Service plan, and the app resource, configuring logging, and doing ZIP deployment. It then gives you a URL to launch the app at `http://<app-name>.azurewebsites.net`, which is the app's URL on Azure.
+The command might take a few minutes to complete. While it runs, it provides messages about creating the resource group, the App Service plan, and the app resource, configuring logging, and doing ZIP deployment. It then gives you a URL to launch the app at `http://<app-name>.azurewebsites.net`, which is the app's URL on Azure.
## Clean up resources
-When these resources are no longer needed, you can delete the resource group, Azure Cosmos DB account, and all the related resources. To do so, select the resource group that you used for the Azure Cosmos DB account, select **Delete**, and then confirm the name of the resource group to delete.
+When these resources are no longer needed, you can delete the resource group, the Azure Cosmos DB account, and all the related resources. To do so, select the resource group that you used for the Azure Cosmos DB account, select **Delete**, and then confirm the name of the resource group to delete.
## Next steps
-* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
- * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+You can use information about your existing database cluster for capacity planning.
+
+* [Convert the number of vCores or vCPUs in your nonrelational database to Azure Cosmos DB RU/s](../convert-vcore-to-request-unit.md)
+* [Estimate RU/s using the Azure Cosmos DB capacity planner - API for NoSQL](estimate-ru-with-capacity-planner.md)
> [!div class="nextstepaction"] > [Build mobile applications with Xamarin and Azure Cosmos DB](/azure/architecture/solution-ideas/articles/gaming-using-cosmos-db)
cosmos-db Use Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/use-metrics.md
Title: Monitor and debug with insights in Azure Cosmos DB
-description: Use metrics in Azure Cosmos DB to debug common issues and monitor the database.
+description: Learn how to use metrics and insights in Azure Cosmos DB to debug common issues and monitor the database.
Previously updated : 11/08/2021 Last updated : 03/13/2023 + # Monitor and debug with insights in Azure Cosmos DB+ [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]
-Azure Cosmos DB provides insights for throughput, storage, consistency, availability, and latency. The Azure portal provides an aggregated view of these metrics. You can also view Azure Cosmos DB metrics from Azure Monitor API. The dimension values for the metrics such as container name are case-insensitive. So you need to use case-insensitive comparison when doing string comparisons on these dimension values. To learn about how to view metrics from Azure monitor, see the [Get metrics from Azure Monitor](./monitor.md) article.
+Azure Cosmos DB provides insights for throughput, storage, consistency, availability, and latency. The Azure portal provides an aggregated view of these metrics. You can also view Azure Cosmos DB metrics from the Azure Monitor API. The dimension values for the metrics, such as container name, are case-insensitive. Therefore, you need to use case-insensitive comparison when doing string comparisons on these dimension values. To learn how to view metrics from Azure Monitor, see [Monitor Azure Cosmos DB](./monitor.md).
This article walks through common use cases and how Azure Cosmos DB insights can be used to analyze and debug these issues. By default, the metric insights are collected every five minutes and are kept for seven days.
This article walks through common use cases and how Azure Cosmos DB insights can
1. You can view your account metrics either from the **Metrics** pane or the **Insights** pane.
- * **Metrics:** This pane provides numerical metrics that are collected at regular intervals and describe some aspect of a system at a particular time. For example, you can view and monitor the [server side latency metric](monitor-server-side-latency.md), [normalized request unit usage metric](monitor-normalized-request-units.md) etc.
+    * **Metrics:** This pane provides numerical metrics that are collected at regular intervals and describe some aspect of a system at a particular time. For example, you can view and monitor the [server side latency metric](monitor-server-side-latency.md), [normalized request unit usage metric](monitor-normalized-request-units.md), etc.
- * **Insights:** This pane provides a customized monitoring experience for Azure Cosmos DB. They use the same metrics and logs that are collected in Azure Monitor and shows an aggregated view for your account.
+ * **Insights:** This pane provides a customized monitoring experience for Azure Cosmos DB. Insights use the same metrics and logs that are collected in Azure Monitor and show an aggregated view for your account.
-1. Open the **Insights** pane. By default, the Insights pane shows the throughput, requests, storage, availability, latency, system, and account management metrics for every container in your account. You can select the **Time Range**, **Database**, and **Container** for which you want to view insights. The **Overview** tab shows RU/s usage, data usage, index usage, throttled requests, and normalized RU/s consumption for the selected database and container.
+1. Open the **Insights** pane. By default, the Insights pane shows the throughput, requests, storage, availability, latency, system, and management operations metrics for every container in your account. You can select the **Time Range**, **Database**, and **Container** for which you want to view insights. The **Overview** tab shows RU/s usage, data usage, index usage, throttled requests, and normalized RU/s consumption for the selected database and container.
- :::image type="content" source="./media/use-metrics/performance-metrics.png" alt-text="Azure Cosmos DB performance metrics in Azure portal" lightbox="./media/use-metrics/performance-metrics.png" :::
+ :::image type="content" source="./media/use-metrics/performance-metrics.png" alt-text="Screenshot of Azure Cosmos DB performance metrics in the Azure portal." lightbox="./media/use-metrics/performance-metrics.png" :::
1. The following metrics are available from the **Insights** pane:
- * **Throughput** - This tab shows the total number of request units consumed or failed (429 response code) because the throughput or storage capacity provisioned for the container has exceeded.
+    * **Throughput**. This tab shows the total number of request units consumed or failed (429 response code) because the throughput or storage capacity provisioned for the container has been exceeded.
- * **Requests** - This tab shows the total number of requests processed by status code, by operation type, and the count of failed requests (429 response code). Requests fail when the throughput or storage capacity provisioned for the container exceeds.
+    * **Requests**. This tab shows the total number of requests processed by status code, by operation type, and the count of failed requests (429 response code). Requests fail when the throughput or storage capacity provisioned for the container is exceeded.
- * **Storage** - This tab shows the size of data and index usage over the selected time period.
+ * **Storage**. This tab shows the size of data and index usage over the selected time period.
- * **Availability** - This tab shows the percentage of successful requests over the total requests per hour. The success rate is defined by the Azure Cosmos DB SLAs.
+    * **Availability**. This tab shows the percentage of successful requests over the total requests per hour. The Azure Cosmos DB SLAs define the success rate.
- * **Latency** - This tab shows the read and write latency observed by Azure Cosmos DB in the region where your account is operating. You can visualize latency across regions for a geo-replicated account. You can also view server-side latency by different operations. This metric doesn't represent the end-to-end request latency.
+ * **Latency**. This tab shows the read and write latency observed by Azure Cosmos DB in the region where your account is operating. You can visualize latency across regions for a geo-replicated account. You can also view server-side latency by different operations. This metric doesn't represent the end-to-end request latency.
- * **System** - This tab shows how many metadata requests are served by the primary partition. It also helps to identify the throttled requests.
+    * **System**. This tab shows how many metadata requests the primary partition serves. It also helps to identify the throttled requests.
- * **Account management** - This tab shows the metrics for account management activities such as account creation, deletion, key updates, network and replication settings.
+ * **Management Operations**. This tab shows the metrics for account management activities such as account creation, deletion, key updates, network and replication settings.
The following sections explain common scenarios where you can use Azure Cosmos DB metrics. ## Understand how many requests are succeeding or causing errors
-To get started, head to the [Azure portal](https://portal.azure.com) and navigate to the **Insights** blade. From this blade, open the **Requests** tab, it shows a chart with the total requests segmented by the status code and operation type. For more information about HTTP status codes, see [HTTP status codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb).
+To get started, head to the [Azure portal](https://portal.azure.com) and navigate to the **Insights** pane. From this pane, open the **Requests** tab. The Requests tab shows a chart with the total requests segmented by the status code and operation type. For more information about HTTP status codes, see [HTTP status codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb).
-The most common error status code is 429 (rate limiting/throttling). This error means that requests to Azure Cosmos DB are more than the provisioned throughput. The most common solution to this problem is to [scale up the RUs](./set-throughput.md) for the given collection.
+The most common error status code is 429 (rate limiting/throttling). This error means that requests to Azure Cosmos DB exceed the provisioned throughput. The most common solution to this problem is to scale up the RUs for the given collection. For more information, see [Introduction to provisioned throughput in Azure Cosmos DB](./set-throughput.md).
## Determine the throughput consumption by a partition key range
-Having a good cardinality of your partition keys is essential for any scalable application. To determine the throughput distribution of any partitioned container broken down by partition key range IDs, navigate to the **Insights** pane. Open the **Throughput** tab, the normalized RU/s consumption across different partition key ranges is shown in the chart.
+Having a good cardinality of your partition keys is essential for any scalable application. To determine the throughput distribution of any partitioned container broken down by partition key range IDs, navigate to the **Insights** pane. Open the **Throughput** tab. The normalized RU/s consumption across different partition key ranges is shown in the chart.
-With the help of this chart, you can identify if there is a hot partition. An uneven throughput distribution may cause *hot* partitions, which can result in throttled requests and may require repartitioning. After identifying which partition key is causing the skew in distribution, you may have to repartition your container with a more distributed partition key. For more information about partitioning in Azure Cosmos DB, see [Partition and scale in Azure Cosmos DB](./partitioning-overview.md).
+With the help of this chart, you can identify if there's a hot partition. An uneven throughput distribution might cause *hot* partitions, which can result in throttled requests and might require repartitioning. After identifying which partition key is causing the skew in distribution, you might have to repartition your container with a more distributed partition key. For more information about partitioning in Azure Cosmos DB, see [Partitioning and horizontal scaling in Azure Cosmos DB](./partitioning-overview.md).
## Determine the data and index usage
-It's important to determine the storage distribution of any partitioned container by data usage, index usage, and document usage. You can minimize the index usage, maximize the data usage and optimize your queries. To get this data, navigate to the **Insights** pane and open the **Storage** tab:
+It's important to determine the storage distribution of any partitioned container by data usage, index usage, and document usage. You can minimize the index usage, maximize the data usage, and optimize your queries. To get this data, navigate to the **Insights** pane and open the **Storage** tab.
## Compare data size against index size
-In Azure Cosmos DB, the total consumed storage is the combination of both the Data size and Index size. Typically, the index size is a fraction of the data size. To learn more, see the [Index size](index-policy.md#index-size) article. In the Metrics blade in the [Azure portal](https://portal.azure.com), the Storage tab showcases the breakdown of storage consumption based on data and index.
+In Azure Cosmos DB, the total consumed storage is the combination of both the data size and index size. Typically, the index size is a fraction of the data size. To learn more, see the [Index size](index-policy.md#index-size) article. In the Metrics pane in the [Azure portal](https://portal.azure.com), the Storage tab showcases the breakdown of storage consumption based on data and index.
```csharp // Measure the document size usage (which includes the index size)
ResourceResponse<DocumentCollection> collectionInfo = await client.ReadDocumentC
If you would like to conserve index space, you can adjust the [indexing policy](index-policy.md).
-## Debug why queries are running slow
+## Debug slow queries
In the API for NoSQL SDKs, Azure Cosmos DB provides query execution statistics.
FeedResponse<dynamic> result = await query.ExecuteNextAsync();
IReadOnlyDictionary<string, QueryMetrics> metrics = result.QueryMetrics; ```
-*QueryMetrics* provides details on how long each component of the query took to execute. The most common root cause for long running queries is scans, meaning the query was unable to leverage the indexes. This problem can be resolved with a better filter condition.
+*QueryMetrics* provides details on how long each component of the query took to execute. The most common root cause for long-running queries is scans, meaning the query was unable to use the indexes. This problem can be resolved with a better filter condition.
## Next steps
-You've now learned how to monitor and debug issues using the metrics provided in the Azure portal. You may want to learn more about improving database performance by reading the following articles:
+You might want to learn more about improving database performance by reading the following articles:
-* To learn about how to view metrics from Azure monitor, see the [Get metrics from Azure Monitor](./monitor.md) article.
-* [Performance and scale testing with Azure Cosmos DB](performance-testing.md)
-* [Performance tips for Azure Cosmos DB](performance-tips.md)
+* [Measure Azure Cosmos DB for NoSQL performance with a benchmarking framework](performance-testing.md)
+* [Performance tips for Azure Cosmos DB and .NET SDK v2](performance-tips.md)
cost-management-billing Azure Plan Subscription Transfer Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/azure-plan-subscription-transfer-partners.md
Previously updated : 05/03/2022 Last updated : 03/29/2023
This article helps customers of Microsoft partners to understand what they need
The steps that a partner takes are documented at [Transfer a customer's Azure subscriptions and/or Reservations (under an Azure plan) to a different CSP](/partner-center/transfer-azure-subscriptions-under-azure-plan). + ## User access Access to existing users, groups, or service principals that were assigned using Azure role-based access control (Azure RBAC) isn't affected during the transition. [Azure RBAC](../../role-based-access-control/overview.md) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to. Your new partner isn't given any Azure RBAC access to your resources by the subscription transfer. Your previous partner keeps their Azure RBAC access.
cost-management-billing Billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/billing-subscription-transfer.md
tags: billing,top-support-issue
Previously updated : 05/04/2022 Last updated : 03/29/2023
Only the billing administrator of an account can transfer ownership of a subscri
When you send or accept a transfer request, you agree to terms and conditions. For more information, see [Transfer terms and conditions](subscription-transfer.md#transfer-terms-and-conditions). + ## Transfer billing ownership of an Azure subscription 1. Sign in to the [Azure portal](https://portal.azure.com) as an administrator of the billing account that has the subscription that you want to transfer. If you're not sure if you're an administrator, or if you need to determine who is, see [Determine account billing administrator](add-change-subscription-administrator.md#whoisaa).
cost-management-billing Create Customer Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-customer-subscription.md
Previously updated : 12/07/2022 Last updated : 03/29/2023
This article helps a Microsoft Partner with a [Microsoft Partner Agreement](http
To learn more about billing accounts and identify your billing account type, see [View billing accounts in Azure portal](view-all-accounts.md). + ## Permission required to create Azure subscriptions You need the following permissions to create customer subscriptions:
cost-management-billing Create Enterprise Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-enterprise-subscription.md
Previously updated : 02/14/2023 Last updated : 03/29/2023
This article helps you create an [Enterprise Agreement (EA)](https://azure.micro
If you want to create subscriptions for Microsoft Customer Agreements, see [Create a Microsoft Customer Agreement subscription](create-subscription.md). If you're a Microsoft Partner and you want to create a subscription for a customer, see [Create a subscription for a partner's customer](create-customer-subscription.md). Or, if you have a Microsoft Online Service Program (MOSP) billing account, also called pay-as-you-go, you can create subscriptions starting in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) and then you complete the process at https://signup.azure.com/. + To learn more about billing accounts and identify your billing account type, see [View billing accounts in Azure portal](view-all-accounts.md). ## Permission required to create Azure subscriptions
cost-management-billing Create Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-subscription.md
Previously updated : 02/14/2023 Last updated : 03/29/2023
If you want to create a Microsoft Customer Agreement subscription in a different
If you want to create subscriptions for Enterprise Agreements, see [Create an EA subscription](create-enterprise-subscription.md). If you're a Microsoft Partner and you want to create a subscription for a customer, see [Create a subscription for a partner's customer](create-customer-subscription.md). Or, if you have a Microsoft Online Service Program (MOSP) billing account, also called pay-as-you-go, you can create subscriptions starting in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) and then you complete the process at https://signup.azure.com/. + To learn more about billing accounts and identify your billing account type, see [View billing accounts in Azure portal](view-all-accounts.md). ## Permission required to create Azure subscriptions
cost-management-billing Mca Request Billing Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-request-billing-ownership.md
tags: billing
Previously updated : 03/23/2023 Last updated : 03/20/2023
When you send or accept a transfer request, you agree to terms and conditions. F
Before you transfer billing products, read [Supplemental information about transfers](subscription-transfer.md#supplemental-information-about-transfers). + ## Prerequisites >[!IMPORTANT]
cost-management-billing Mosp Ea Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mosp-ea-transfer.md
tags: billing
Previously updated : 12/13/2022 Last updated : 03/29/2023
This article helps you understand the steps needed to transfer an individual Microsoft Customer Agreement subscription (Azure offer MS-AZR-0017G pay-as-you-go) or a MOSP (pay-as-you-go) subscription (Azure offer MS-AZR-003P) to an EA. The transfer has no downtime; however, there are many steps to follow to enable the transfer. +
cost-management-billing Mpa Request Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mpa-request-ownership.md
tags: billing
Previously updated : 11/29/2022 Last updated : 03/29/2023
There are three options to transfer products:
- Transfer only reservations - Transfer both subscriptions and reservations + ## Prerequisites 1. Establish [reseller relationship](/partner-center/request-a-relationship-with-a-customer) with the customer.
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
This article describes the types of supported transfers for Azure subscriptions,
This article also helps you understand the things you should know _before_ you transfer billing ownership of an Azure product to another account. You might want to transfer billing ownership of your Azure product if you're leaving your organization, or you want your product to be billed to another account. Transferring billing ownership to another account provides the administrators in the new account permission for billing tasks. They can change the payment method, view charges, and cancel the product. + If you want to keep the billing ownership but change the type of product, see [Switch your Azure subscription to another offer](switch-azure-offer.md). To control who can access resources in the product, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md). If you're an Enterprise Agreement (EA) customer, your enterprise administrators can transfer billing ownership of your products between accounts in the EA portal. For more information, see [Change Azure subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership).
cost-management-billing Transfer Subscriptions Subscribers Csp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/transfer-subscriptions-subscribers-csp.md
Previously updated : 10/19/2022 Last updated : 03/29/2023
This article provides high-level steps used to transfer Azure subscriptions to a
Download or export cost and billing information that you want to keep before you start a transfer request. Billing and utilization information doesn't transfer with the subscription. For more information about exporting cost management data, see [Create and manage exported data](../costs/tutorial-export-acm-data.md). For more information about downloading your invoice and usage data, see [Download or view your Azure billing invoice and daily usage data](download-azure-invoice-daily-usage-date.md). + ## Transfer EA or MCA enterprise subscriptions to a CSP partner CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure subscriptions for their customers. The customers must have a Direct Enterprise Agreement (EA) or a Microsoft account team (Microsoft Customer Agreement enterprise). Subscription transfers are allowed only for customers who have accepted an MCA and purchased an Azure plan with the CSP Program.
data-factory Concepts Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture.md
To learn more, see [Azure Data Factory overview](introduction.md) or [Azure Syna
## Overview
-When you perform data integration and ETL processes in the cloud, your jobs can perform much better and be more effective when you only read the source data that has changed since the last time the pipeline ran, rather than always querying an entire dataset on each run. ADF provides multiple different ways for you to easily get delta data only from the last run.
+When you perform data integration and ETL processes in the cloud, your jobs can perform better and be more effective when you only read the source data that has changed since the last time the pipeline ran, rather than always querying an entire dataset on each run. ADF provides multiple different ways for you to easily get delta data only from the last run.
### Change Data Capture factory resource
-The easiest and quickest way to get started in data factory with CDC is through the factory level Change Data Capture resource. From the main pipeline designer, click on New under Factory Resources to create a new Change Data Capture. The CDC factory resource will provide a configuration walk-through experience where you will point to your sources and destinations, apply optional transformations, and then click start to begin your data capture. With the CDC resource, you will not need to design pipelines or data flow activities and the only billing will be 4 cores of General Purpose data flows while your data in being processed. You set a latency which ADF will use to wake-up and look for changed data. That is the only time you will be billed. The top-level CDC resource is also the ADF method of running your processes continuously. Pipelines in ADF are batch only. But the CDC resource can run continuously.
+The easiest and quickest way to get started in data factory with CDC is through the factory-level Change Data Capture resource. From the main pipeline designer, click on **New** under Factory Resources to create a new Change Data Capture. The CDC factory resource provides a configuration walk-through experience where you can select your sources and destinations, apply optional transformations, and then click start to begin your data capture. With the CDC resource, you do not need to design pipelines or data flow activities. You are also only billed for four cores of General Purpose data flows while your data is being processed. You can set a preferred latency, which ADF will use to wake up and look for changed data. That is the only time you will be billed. The top-level CDC resource is also the ADF method of running your processes continuously. Pipelines in ADF are batch only, but the CDC resource can run continuously.
### Native change data capture in mapping data flow
-The changed data including inserted, updated and deleted rows can be automatically detected and extracted by ADF mapping data flow from the source databases. No timestamp or ID columns are required to identify the changes since it uses the native change data capture technology in the databases. By simply chaining a source transform and a sink transform reference to a database dataset in a mapping data flow, you will see the changes happened on the source database to be automatically applied to the target database, so that you can easily synchronize data between two tables. You can also add any transformations in between for any business logic to process the delta data. When defining your sink data destination, you can set insert, update, upsert, and delete operations in your sink without the need of an Alter Row transformation because ADF is able to automatically detect the row makers.
+The changed data, including inserted, updated, and deleted rows, can be automatically detected and extracted by ADF mapping data flow from the source databases. No timestamp or ID columns are required to identify the changes because it uses the native change data capture technology in the databases. By simply chaining a source transform and a sink transform reference to a database dataset in a mapping data flow, you can see the changes that happened on the source database automatically applied to the target database, so that you can easily synchronize data between two tables. You can also add any transformations in between for any business logic to process the delta data. When defining your sink data destination, you can set insert, update, upsert, and delete operations in your sink without the need of an Alter Row transformation because ADF is able to automatically detect the row markers.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5bkg2]
The changed data including inserted, updated and deleted rows can be automatical
- [SQL Server](connector-sql-server.md) - [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md) - [Azure Cosmos DB (SQL API)](connector-azure-cosmos-db.md)
+- [Azure Cosmos DB analytical store](../cosmos-db/analytical-store-introduction.md)
### Auto incremental extraction in mapping data flow
You can always build your own delta data extraction pipeline for all ADF support
**Change files capture from file based storages** -- When you want to load data from Azure Blob Storage, Azure Data Lake Storage Gen2 or Azure Data Lake Storage Gen1, mapping data flow provides you the opportunity to get new or updated files only by simple one click. It is the simplest and recommended way for you to achieve delta load from these file based storages in mapping data flow.
+- When you want to load data from Azure Blob Storage, Azure Data Lake Storage Gen2, or Azure Data Lake Storage Gen1, mapping data flow provides you with the opportunity to get new or updated files only with one simple click. It is the simplest and recommended way for you to achieve delta load from these file-based storages in mapping data flow.
- You can get more [best practices](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/best-practices-of-how-to-use-adf-copy-activity-to-copy-new-files/ba-p/1532484).
data-factory Connector Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-db.md
When you debug the pipeline, this feature works the same. Be aware that the chec
In the monitoring section, you always have the chance to rerun a pipeline. When you are doing so, the changed data is always captured from the previous checkpoint of your selected pipeline run.
+In addition, Azure Cosmos DB analytical store now supports Change Data Capture (CDC) for Azure Cosmos DB API for NoSQL and Azure Cosmos DB API for MongoDB (public preview). Azure Cosmos DB analytical store allows you to efficiently consume a continuous and incremental feed of changed (inserted, updated, and deleted) data from the analytical store.
+++ ## Next steps For a list of data stores that Copy Activity supports as sources and sinks, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Troubleshoot Ftp Sftp Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-ftp-sftp-http.md
Previously updated : 08/12/2022 Last updated : 03/27/2023
This article provides suggestions to troubleshoot common problems with the FTP,
## SFTP
-#### Error code: SftpOperationFail
+### Error code: SftpOperationFail
- **Message**: `Failed to '%operation;'. Check detailed error from SFTP.`
This article provides suggestions to troubleshoot common problems with the FTP,
- **Recommendation**: Get a valid fingerprint using the Host Key Name in `real finger-print` from the error message in the SFTP server. You can run the command to get the fingerprint on your SFTP server. For example: run `ssh-keygen -E md5 -lf <keyFilePath>` in Linux server to get the fingerprint. The command may vary among different server types.
+### Error code: UnsupportedCompressionTypeWhenDisableChunking
+
+- **Message**: `"Disable chunking" is not compatible with "ZipDeflate" decompression.`
+
+- **Cause**: **Disable chunking** is not compatible with **ZipDeflate** decompression.
+
+- **Recommendation**: Load the binary data to a staging area (for example, Azure Blob Storage) and decompress it in another copy activity.
+ ## HTTP ### Error code: HttpFileFailedToRead
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
For commercial and national cloud coverage, see the [features supported in diffe
## Defender CSPM plan options
-Defender for cloud offers foundational multicloud CSPM capabilities for free. These capabilities are automatically enabled by default on any subscription or account that has onboarded to Defender for Cloud. The foundational CSPM includes asset discovery, continuous assessment and security recommendations for posture hardening, compliance with Microsoft Cloud Security Benchmark (MCSB), and a [Secure score](secure-score-access-and-track.md) which measure the current status of your organization's posture.
+Defender for Cloud offers foundational multicloud CSPM capabilities for free. These capabilities are automatically enabled by default on any subscription or account that has onboarded to Defender for Cloud. The foundational CSPM includes asset discovery, continuous assessment and security recommendations for posture hardening, compliance with Microsoft Cloud Security Benchmark (MCSB), and a [Secure score](secure-score-access-and-track.md), which measures the current status of your organization's posture.
The optional Defender CSPM plan provides advanced posture management capabilities such as [Attack path analysis](how-to-manage-attack-path.md), [Cloud security explorer](how-to-manage-cloud-security-explorer.md), advanced threat hunting, [security governance capabilities](concept-regulatory-compliance.md), and also tools to assess your [security compliance](review-security-recommendations.md) with a wide range of benchmarks, regulatory standards, and any custom security policies required in your organization, industry, or region.
defender-for-cloud Concept Data Security Posture Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture-prepare.md
The table summarizes support for data-aware posture management.
What Azure data resources can I scan? | Azure storage accounts v1, v2<br/><br/> Azure Data Lake Storage Gen1/Gen2<br/><br/>Accounts are supported behind private networks but not behind private endpoints.<br/><br/> Defender for Cloud can discover data encrypted by KMB or a customer-managed key. <br/><br/>Page blobs aren't scanned. What AWS data resources can I scan? | AWS S3 buckets<br/><br/> Defender for Cloud can scan encrypted data, but not data encrypted with a customer-managed key. What permissions do I need for scanning? | Storage account: Subscription Owner or Microsoft.Storage/storageaccounts/{read/write} and Microsoft.Authorization/roleAssignments/{read/write/delete}<br/><br/> Amazon S3 buckets: AWS account permission to run Cloud Formation (to create a role).
-What file types are supported for sensitive data discovery? | Supported file types (you can't select a subset) - .doc, .docm, .docx, .dot, .odp, .ods, .odt, .pdf, .pot, .pps, .ppsx, .ppt, .pptm, .pptx, .xlc, .xls, .xlsb, .xlsm, .xlsx, .xlt, .cvs, .json, .psv, .ssv, .tsv, .txt., xml, .parquet, .avro, .orc.
+What file types are supported for sensitive data discovery? | Supported file types (you can't select a subset) - .doc, .docm, .docx, .dot, .odp, .ods, .odt, .pdf, .pot, .pps, .ppsx, .ppt, .pptm, .pptx, .xlc, .xls, .xlsb, .xlsm, .xlsx, .xlt, .csv, .json, .psv, .ssv, .tsv, .txt., xml, .parquet, .avro, .orc.
What Azure regions are supported? | You can scan Azure storage accounts in:<br/><br/> Australia Central; Australia Central 2; Australia East; Australia Southeast; Brazil South; Canada Central; Canada East; Central India; Central US; East Asia; East US; East US 2; France Central; Germany West Central; Japan East; Japan West: Jio India West: North Central US; North Europe; Norway East; South Africa North: South Central US; South India; Sweden Central; Switzerland North; UAE North; UK South; UK West: West Central US; West Europe; West US, West US3.<br/><br/> Scanning is done locally in the region. What AWS regions are supported? | Asia Pacific (Mumbai); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Central); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California): US West (Oregon).<br/><br/> Scanning is done locally in the region. Do I need to install an agent? | No, scanning is agentless.
defender-for-cloud Governance Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/governance-rules.md
You can then review the progress of the tasks by subscription, recommendation, o
|Aspect|Details| |-|:-|
-|Release state:|Preview.<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
+|Release state:|General availability (GA)|
| Prerequisite: | Requires the [Defender Cloud Security Posture Management (CSPM) plan](concept-cloud-security-posture-management.md) to be enabled.| |Required roles and permissions:|Azure - **Contributor**, **Security Admin**, or **Owner** on the subscription<br>AWS, GCP – **Contributor**, **Security Admin**, or **Owner** on the connector| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP accounts|
The governance report lets you select subscriptions that have governance rules a
**To review the status of the recommendations in a rule**:
-1. In **Recommendations**, select **Governance report (preview)**.
+1. In **Recommendations**, select **Governance report**.
1. Select the subscriptions that you want to review. 1. Select the rules that you want to see details about.
You can see the list of owners and recommendations for the selected rules, and t
**To see the list of recommendations for each owner**: 1. Select **Security posture**.
-1. Select the **Owner (preview)** tab to see the list of owners and the number of overdue recommendations for each owner.
+1. Select the **Owner** tab to see the list of owners and the number of overdue recommendations for each owner.
- Hover over the (i) in the overdue recommendations to see the breakdown of overdue recommendations by severity.
defender-for-cloud Kubernetes Workload Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md
Microsoft Defender for Cloud includes a bundle of recommendations that are avail
## Enable Kubernetes data plane hardening
-When you enable Microsoft Defender for Containers, Azure Kubernetes Service clusters, and Azure Arc enabled Kubernetes clusters (Preview) protection are both enabled by default. You can configure your Kubernetes data plane hardening, when you enable Microsoft Defender for Containers.
+When you enable Microsoft Defender for Containers, the "Azure Policy for Kubernetes" setting is enabled by default for the Azure Kubernetes Service, and for Azure Arc-enabled Kubernetes clusters in the relevant subscription. If you disable the setting, you can re-enable it later, either in the Defender for Containers plan settings or with Azure Policy.
-**To enable Azure Kubernetes Service clusters and Azure Arc enabled Kubernetes clusters (Preview)**:
+When you enable this setting, the Azure Policy for Kubernetes pods are installed on the cluster. This allocates a small amount of CPU and memory for the pods to use. This allocation might reach maximum capacity, but it doesn't affect the rest of the CPU and memory on the resource.
+
+To enable Azure Kubernetes Service clusters and Azure Arc enabled Kubernetes clusters (Preview):
1. Sign in to the [Azure portal](https://portal.azure.com).
defender-for-cloud Multi Factor Authentication Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/multi-factor-authentication-enforcement.md
To investigate why the recommendations are still being generated, verify the fol
### We're using a third-party MFA tool to enforce MFA. Why do we still get the Defender for Cloud recommendations? Defender for Cloud's MFA recommendations doesn't support third-party MFA tools (for example, DUO).
-If the recommendations are irrelevant for your organization, consider marking them as "mitigated" as described in [Exempting resources and recommendations from your secure score](exempt-resource.md). You can also [disable a recommendation](tutorial-security-policy.md#disable-security-policies-and-disable-recommendations).
+If the recommendations are irrelevant for your organization, consider marking them as "mitigated" as described in [Exempting resources and recommendations from your secure score](exempt-resource.md). You can also [disable a recommendation](tutorial-security-policy.md#disable-a-security-recommendation).
### Why does Defender for Cloud show user accounts without permissions on the subscription as "requiring MFA"? Defender for Cloud's MFA recommendations refers to [Azure RBAC](../role-based-access-control/role-definitions-list.md) roles and the [Azure classic subscription administrators](../role-based-access-control/classic-administrators.md) role. Verify that none of the accounts have such roles.
defender-for-cloud Plan Multicloud Security Define Adoption Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-define-adoption-strategy.md
Think about your broad requirements:
- **Determine business needs**. Keep first steps simple, and then iterate to accommodate future change. Decide your goals for a successful adoption, and then the metrics you'll use to define success. - **Determine ownership**. Figure out where multicloud capabilities fall under your teams. Review the [determine ownership requirements](plan-multicloud-security-determine-ownership-requirements.md#determine-ownership-requirements) and [determine access control requirements](plan-multicloud-security-determine-access-control-requirements.md#determine-access-control-requirements) articles to answer these questions:
- - How will your organization use Defender for cloud as a multicloud solution?
+ - How will your organization use Defender for Cloud as a multicloud solution?
 - What [cloud security posture management (CSPM)](plan-multicloud-security-determine-multicloud-dependencies.md) and [cloud workload protection (CWP)](plan-multicloud-security-determine-multicloud-dependencies.md) capabilities do you want to adopt? - Which teams will own the different parts of Defender for Cloud? - What is your process for responding to security alerts and recommendations? Remember to consider Defender for Cloud's governance feature when making decisions about recommendation processes.
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Follow the steps below to create your GCP cloud connector.
| CSPM | Defender for Containers| |--|--|
 - | CSPM service account reader role <br> Microsoft Defender for Cloud identity federation <br> CSPM identity pool <br>*Microsoft Defender for Servers* service account (when the servers plan is enabled) <br>*Azure-Arc for servers onboarding* service account (when the Arc for servers auto-provisioning is enabled) | Microsoft Defender Containers' service account role <br> Microsoft Defender Data Collector service account role <br> Microsoft Defender for cloud identity pool |
 + | CSPM service account reader role <br> Microsoft Defender for Cloud identity federation <br> CSPM identity pool <br>*Microsoft Defender for Servers* service account (when the servers plan is enabled) <br>*Azure-Arc for servers onboarding* service account (when the Arc for servers auto-provisioning is enabled) | Microsoft Defender Containers' service account role <br> Microsoft Defender Data Collector service account role <br> Microsoft Defender for Cloud identity pool |
After creating a connector, a scan will start on your GCP environment. New recommendations will appear in Defender for Cloud after up to 6 hours. If you enabled auto-provisioning, Azure Arc and any enabled extensions will install automatically for each new resource detected.
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Security Center includes multiple recommendations to encrypt data at rest with c
Data in Azure is encrypted automatically using platform-managed keys, so the use of customer-managed keys should only be applied when required for compliance with a specific policy your organization is choosing to enforce.
-With this change, the recommendations to use CMKs are now **disabled by default**. When relevant for your organization, you can enable them by changing the *Effect* parameter for the corresponding security policy to **AuditIfNotExists** or **Enforce**. Learn more in [Enable a security policy](tutorial-security-policy.md#enable-a-security-policy).
+With this change, the recommendations to use CMKs are now **disabled by default**. When relevant for your organization, you can enable them by changing the *Effect* parameter for the corresponding security policy to **AuditIfNotExists** or **Enforce**. Learn more in [Enable a security recommendation](tutorial-security-policy.md#enable-a-security-recommendation).
This change is reflected in the names of the recommendation with a new prefix, **[Enable if required]**, as shown in the following examples:
Learn more in [Identify vulnerable container images in your CI/CD workflows](def
### More Resource Graph queries available for some recommendations
-All of Security Center's recommendations have the option to view the information about the status of affected resources using Azure Resource Graph from the **Open query**. For full details about this powerful feature, see [Review recommendation data in Azure Resource Graph Explorer (ARG)](review-security-recommendations.md#review-recommendation-data-in-azure-resource-graph-explorer-arg).
+All of Security Center's recommendations have the option to view the information about the status of affected resources using Azure Resource Graph from the **Open query**. For full details about this powerful feature, see [Review recommendation data in Azure Resource Graph Explorer (ARG)](review-security-recommendations.md#review-recommendation-data-in-azure-resource-graph-arg).
Security Center includes built-in vulnerability scanners to scan your VMs, SQL servers and their hosts, and container registries for security vulnerabilities. The findings are returned as recommendations with all the individual findings for each resource type gathered into a single view. The recommendations are:
defender-for-cloud Review Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-security-recommendations.md
To get to the list of recommendations:
- In the Defender for Cloud overview, select **Security posture** and then select **View recommendations** for the environment you want to improve. - Go to **Recommendations** in the Defender for Cloud menu.
-You can search for specific recommendations by name. Use the search box and filters above the list of recommendations to find specific recommendations. Look at the [details of the recommendation](security-policy-concept.md#security-recommendation-details) to decide whether to [remediate it](implement-security-recommendations.md), [exempt resources](exempt-resource.md), or [disable the recommendation](tutorial-security-policy.md#disable-security-policies-and-disable-recommendations).
+You can search for specific recommendations by name. Use the search box and filters above the list of recommendations to find specific recommendations. Look at the [details of the recommendation](security-policy-concept.md#security-recommendation-details) to decide whether to [remediate it](implement-security-recommendations.md), [exempt resources](exempt-resource.md), or [disable the recommendation](tutorial-security-policy.md#disable-a-security-recommendation).
You can learn more by watching this video from the Defender for Cloud in the Field video series: - [Security posture management improvements](episode-four.md)
To change the owner of resources and set the ETA for remediation of recommendati
The due date for the recommendation doesn't change, but the security team can see that you plan to update the resources by the specified ETA date.
-## Review recommendation data in Azure Resource Graph Explorer (ARG)
+## Review recommendation data in Azure Resource Graph (ARG)
You can review recommendations in ARG either on the Recommendations page or for an individual recommendation.
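Beyond the portal's **Open query** option, you can run the same kind of query yourself. As a rough sketch (assuming the `resource-graph` CLI extension is installed), the following summarizes assessment results by status:

```bash
# One-time setup: add the Resource Graph extension
az extension add --name resource-graph

# Summarize Defender for Cloud assessments (recommendations) by status
az graph query -q 'securityresources
| where type == "microsoft.security/assessments"
| summarize count() by status = tostring(properties.status.code)'
```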
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
You can also [configure the Enforce and Deny options](prevent-misconfigurations.
The table below lists the security controls in Microsoft Defender for Cloud. For each control, you can see the maximum number of points you can add to your secure score if you remediate *all* of the recommendations listed in the control, for *all* of your resources.
-The set of security recommendations provided with Defender for Cloud is tailored to the available resources in each organization's environment. You can [disable policies](tutorial-security-policy.md#disable-security-policies-and-disable-recommendations) and [exempt specific resources from a recommendation](exempt-resource.md) to further customize the recommendations.
+The set of security recommendations provided with Defender for Cloud is tailored to the available resources in each organization's environment. You can [disable recommendations](tutorial-security-policy.md#disable-a-security-recommendation) and [exempt specific resources from a recommendation](exempt-resource.md) to further customize the recommendations.
We recommend that every organization carefully review its assigned Azure Policy initiatives. > [!TIP]
-> For details about reviewing and editing your initiatives, see [Working with security policies](tutorial-security-policy.md).
+> For details about reviewing and editing your initiatives, see [manage security policies](tutorial-security-policy.md).
Even though Defender for Cloud's default security initiative, the Azure Security Benchmark, is based on industry best practices and standards, there are scenarios in which the built-in recommendations listed below might not completely fit your organization. It's sometimes necessary to adjust the default initiative - without compromising security - to ensure it's aligned with your organization's own policies, industry standards, regulatory standards, and benchmarks.<br><br>
No. It won't change until you remediate all of the recommendations for a single
### If a recommendation isn't applicable to me, and I disable it in the policy, will my security control be fulfilled and my secure score updated?
-Yes. We recommend disabling recommendations when they're inapplicable in your environment. For instructions on how to disable a specific recommendation, see [Disable security policies](./tutorial-security-policy.md#disable-security-policies-and-disable-recommendations).
+Yes. We recommend disabling recommendations when they're inapplicable in your environment. For instructions on how to disable a specific recommendation, see [Disable security recommendations](./tutorial-security-policy.md#disable-a-security-recommendation).
### If a security control offers me zero points towards my secure score, should I ignore it?
defender-for-cloud Security Policy Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/security-policy-concept.md
If you're reviewing the list of recommendations on our [Security recommendations
This page explained, at a high level, the basic concepts and relationships between policies, initiatives, and recommendations. For related information, see: - [Create custom initiatives](custom-security-policies.md)-- [Disable security policies to disable recommendations](tutorial-security-policy.md#disable-security-policies-and-disable-recommendations)
+- [Disable security recommendations](tutorial-security-policy.md#disable-a-security-recommendation)
- [Learn how to edit a security policy in Azure Policy](../governance/policy/tutorials/create-and-manage.md)
defender-for-cloud Tutorial Security Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-security-policy.md
Title: Working with security policies description: Learn how to work with security policies in Microsoft Defender for Cloud. - Previously updated : 01/24/2023 Last updated : 01/25/2022 # Manage security policies
To understand the relationships between initiatives, policies, and recommendatio
## Who can edit security policies?
-Defender for Cloud uses Azure role-based access control (Azure RBAC), which provides built-in roles you can assign to Azure users, groups, and services. When users open Defender for Cloud, they see only information related to the resources they can access. Which means users are assigned the role of *owner*, *contributor*, or *reader* to the resource's subscription. There are also two specific Defenders for Cloud roles:
+Defender for Cloud uses Azure role-based access control (Azure RBAC), which provides built-in roles you can assign to Azure users, groups, and services. When users open Defender for Cloud, they see only information related to the resources they can access, meaning resources for which they're assigned the *owner*, *contributor*, or *reader* role on the resource's subscription. There are two specific Defender for Cloud roles that can view and manage security policies:
- **Security reader**: Has rights to view Defender for Cloud items such as recommendations, alerts, policy, and health. Can't make changes. - **Security admin**: Has the same view rights as *security reader*. Can also update the security policy and dismiss alerts.
-You can edit security policies through the Azure Policy portal, via REST API or using Windows PowerShell.
+You can edit Azure security policies through Defender for Cloud, the Azure Policy portal, the REST API, or PowerShell.
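As an illustrative sketch only (the account and subscription ID are placeholders), granting a user the built-in **Security Admin** role with the Azure CLI might look like this:

```bash
# Grant a user the Security Admin role on a subscription
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Security Admin" \
  --scope "/subscriptions/<subscription-id>"
```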
## Manage your security policies To view your security policies in Defender for Cloud:
-1. From Defender for Cloud's menu, open the **Environment settings** page. Here, you can see the management groups, subscriptions, and the initiatives applied to each.
+1. From Defender for Cloud's menu, open the **Environment settings** page. Here, you can see the Azure management groups or subscriptions.
-1. Select the relevant subscription or management group whose policies you want to view.
+1. Select the relevant subscription or management group whose security policies you want to view.
1. Open the **Security policy** page. 1. The security policy page for that subscription or management group appears. It shows the available and assigned policies.
- :::image type="content" source="./media/tutorial-security-policy/security-policy-page.png" alt-text="Defender for Cloud's security policy page" lightbox="./media/tutorial-security-policy/security-policy-page.png":::
+ :::image type="content" source="./media/tutorial-security-policy/security-policy-page.png" alt-text="Screenshot showing a subscription's security policy settings page." lightbox="./media/tutorial-security-policy/security-policy-page.png":::
> [!NOTE]
- > If there is a label "MG Inherited" alongside your default initiative, it means that the initiative has been assigned to a management group and inherited by the subscription you're viewing.
+ > The settings of each recommendation that apply to the scope are compared, and the cumulative outcome of the actions taken by the recommendation appears. For example, if a recommendation is set to Disabled in one assignment but to Audit in another, the cumulative effect is Audit. The more active effect always takes precedence.
1. Choose from the available options on this page:
To view your security policies in Defender for Cloud:
1. To view and edit the default initiative, select it and proceed as described below.
+ :::image type="content" source="./media/tutorial-security-policy/policy-screen.png" alt-text="Screenshot showing the security policy settings for a subscription, focusing on the default assignment." lightbox="./media/tutorial-security-policy/policy-screen.png":::
+ This **Security policy** screen reflects the action taken by the policies assigned on the subscription or management group you selected.
- * Use the links at the top to open a policy **assignment** that applies on the subscription or management group. These links let you access the assignment and edit or disable the policy. For example, if you see that a particular policy assignment is effectively denying endpoint protection, use the link to edit or disable the policy.
+ * Use the links at the top to open a policy **assignment** that applies on the subscription or management group. These links let you access the assignment and manage recommendations. For example, if you see that a particular recommendation is set to the audit effect, use the link to change it to deny or to disable it from being evaluated.
- * In the list of policies, you can see the effective application of the policy on your subscription or management group. The settings of each policy that apply to the scope are taken into consideration and the cumulative outcome of actions taken by the policy is shown. For example, if in one assignment of the policy is disabled, but in another it's set to AuditIfNotExist, then the cumulative effect applies AuditIfNotExist. The more active effect always takes precedence.
+ * In the list of recommendations, you can see the effective application of the recommendation on your subscription or management group.
- * The policies' effect can be: Append, Audit, AuditIfNotExists, Deny, DeployIfNotExists, Disabled. For more information on how effects are applied, see [Understand Policy effects](../governance/policy/concepts/effects.md).
+ * The recommendations' effect can be:
+
+ **Audit** evaluates the compliance state of resources according to recommendation logic.<br>
+ **Deny** prevents deployment of non-compliant resources based on recommendation logic.<br>
+ **Disabled** prevents the recommendation from running.
+
+ :::image type="content" source="./media/tutorial-security-policy/default-assignment-screen.png" alt-text="Screenshot showing the edit default assignment screen." lightbox="./media/tutorial-security-policy/default-assignment-screen.png":::
+
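If you want to inspect these effects outside the portal, the default initiative is assigned to each subscription as an Azure Policy assignment (typically named `SecurityCenterBuiltIn`). A rough sketch with the Azure CLI, using a placeholder subscription ID:

```bash
# Show the effect parameters configured on the default Defender for Cloud assignment
az policy assignment show \
  --name "SecurityCenterBuiltIn" \
  --scope "/subscriptions/<subscription-id>" \
  --query "parameters"
```
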
+## Enable a security recommendation
+
+Some recommendations might be disabled by default. For example, in the Azure Security Benchmark initiative, some recommendations are provided for you to enable only if they meet a specific regulatory or compliance requirement for your organization, such as recommendations to encrypt data at rest with customer-managed keys, for example "Container registries should be encrypted with a customer-managed key (CMK)".
+
+To enable a disabled recommendation and ensure it's assessed for your resources:
- > [!NOTE]
- > When you view assigned policies, you can see multiple assignments and you can see how each assignment is configured on its own.
+1. From Defender for Cloud's menu, open the **Environment settings** page.
+1. Select the subscription or management group for which you want to enable a recommendation.
-## Disable security policies and disable recommendations
+1. Open the **Security policy** page.
-When your security initiative triggers a recommendation that's irrelevant for your environment, you can prevent that recommendation from appearing again. To disable a recommendation, disable the specific policy that generates the recommendation.
+1. From the **Default initiative** section, select the relevant initiative.
-The recommendation you want to disable will still appear if it's required for a regulatory standard you've applied with Defender for Cloud's regulatory compliance tools. Even if you've disabled a policy in the built-in initiative, a policy in the regulatory standard's initiative will still trigger the recommendation if it's necessary for compliance. You can't disable policies from regulatory standard initiatives.
+1. Search for the recommendation that you want to enable, using the search bar or filters.
-For more information about recommendations, see [Managing security recommendations](review-security-recommendations.md).
+1. Select the ellipsis menu, and then select **Manage effect and parameters**.
+1. From the effect section, select **Audit**.
-1. From Defender for Cloud's menu, open the **Environment settings** page. Here, you can see the management groups, subscriptions, and the initiatives applied to each.
+1. Select **Save**.
-1. Select the subscription or management group for which you want to disable the recommendation (and policy).
+ :::image type="content" source="./media/tutorial-security-policy/enable-security-recommendation.png" alt-text="Screenshot showing the manage effect and parameters screen for a given recommendation." lightbox="./media/tutorial-security-policy/enable-security-recommendation.png":::
> [!NOTE]
- > Remember that a management group applies its policies to its subscriptions. Therefore, if you disable a subscription's policy, and the subscription belongs to a management group that still uses the same policy, then you will continue to receive the policy recommendations. The policy will still be applied from the management level and the recommendations will still be generated.
+ > The setting takes effect immediately, but recommendations update based on their freshness interval (up to 12 hours).
+
+## Manage a security recommendation's settings
+
+It may be necessary to configure additional parameters for some recommendations.
+As an example, diagnostic logging recommendations have a default retention period of one day. If your organization's security requirements call for logs to be kept longer, for example 30 days, you can change the default value.
+The **additional parameters** column indicates whether a recommendation has associated additional parameters:
+
+**Default** – the recommendation is running with default configuration<br>
+**Configured** – the recommendation's configuration is modified from its default values<br>
+**None** – the recommendation doesn't require any additional configuration
+
+1. From Defender for Cloud's menu, open the **Environment settings** page.
+
+1. Select the subscription or management group for which you want to configure a recommendation.
1. Open the **Security policy** page.
-1. From the **Default initiative** or **Your custom initiatives** sections, select the relevant initiative containing the policy you want to disable.
+1. From the **Default initiative** section, select the relevant initiative.
+
+1. Search for the recommendation that you want to configure.
-1. Open the **Parameters** section and search for the policy that invokes the recommendation that you want to disable.
+ > [!TIP]
+ > To view all available recommendations with additional parameters, use the filters to view the **Additional parameters** column, and then filter for **Default**.
-1. From the dropdown list, change the value for the corresponding policy to **Disabled**.
+1. Select the ellipsis menu, and then select **Manage effect and parameters**.
- ![disable policy.](./media/tutorial-security-policy/disable-policy.png)
+1. From the additional parameters section, configure the available parameters with new values.
1. Select **Save**.
- > [!NOTE]
- > The change might take up to 12 hours to take effect.
+ :::image type="content" source="./media/tutorial-security-policy/additional-parameters.png" alt-text="Screenshot showing where to configure additional parameters on the manage effect and parameters screen." lightbox="./media/tutorial-security-policy/additional-parameters.png":::
+
+Use the "reset to default" button to revert changes per the recommendation and restore the default value.
+## Disable a security recommendation
-## Enable a security policy
+When your security policy triggers a recommendation that's irrelevant for your environment, you can prevent that recommendation from appearing again. To disable a recommendation, select an initiative and change its settings to disable relevant recommendations.
-Some policies in your initiatives might be disabled by default. For example, in the Microsoft cloud security benchmark initiative, some policies are provided for you to enable only if they meet a specific regulatory or compliance requirement for your organization. Such policies include recommendations to encrypt data at rest with customer-managed keys, such as "Container registries should be encrypted with a customer-managed key (CMK)".
+The recommendation you want to disable will still appear if it's required for a regulatory standard you've applied with Defender for Cloud's regulatory compliance tools. Even if you've disabled a recommendation in the built-in initiative, the equivalent recommendation in the regulatory standard's initiative will still be triggered if it's necessary for compliance. You can't disable recommendations from regulatory standard initiatives.
-To enable a disabled policy and ensure it's assessed for your resources:
+Learn more about [managing security recommendations](review-security-recommendations.md).
-1. From Defender for Cloud's menu, open the **Environment settings** page. Here, you can see the management groups, subscriptions, and the initiatives applied to each.
+1. From Defender for Cloud's menu, open the **Environment settings** page.
-1. Select the subscription or management group for which you want to enable the recommendation (and policy).
+1. Select the subscription or management group for which you want to disable a recommendation.
+
+ > [!NOTE]
+ > Remember that a management group applies its settings to its subscriptions. If you disabled a subscription's recommendation, and the subscription belongs to a management group that still uses the same settings, then you will continue to receive the recommendation. The security policy settings will still be applied from the management level and the recommendation will still be generated.
1. Open the **Security policy** page.
-1. From the **Default initiative** or **Your custom initiatives** sections, select the relevant initiative with the policy you want to enable.
+1. From the **Default initiative** section, select the relevant initiative.
-1. Open the **Parameters** section and search for the policy that invokes the recommendation that you want to disable.
+1. Search for the recommendation that you want to disable, using the search bar or filters.
-1. From the dropdown list, change the value for the corresponding policy to **AuditIfNotExists** or **Enforce**.
+1. Select the ellipsis menu, and then select **Manage effect and parameters**.
+
+1. From the effect section, select **Disabled**.
1. Select **Save**. > [!NOTE]
- > The change might take up to 12 hours to take effect.
-
+ > The setting takes effect immediately, but recommendations update based on their freshness interval (up to 12 hours).
## Next steps This page explained security policies. For related information, see the following pages:
This page explained security policies. For related information, see the followin
- [Learn how to set policies using PowerShell](../governance/policy/assign-policy-powershell.md) - [Learn how to edit a security policy in Azure Policy](../governance/policy/tutorials/create-and-manage.md) - [Learn how to set a policy across subscriptions or on Management groups using Azure Policy](../governance/policy/overview.md)-- [Learn how to enable Defender for Cloud on all subscriptions in a management group](onboard-management-group.md)
+- [Learn how to enable Defender for Cloud on all subscriptions in a management group](onboard-management-group.md)
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you'll find them in the [What's
| [Deprecation of legacy compliance standards across cloud environments](#deprecation-of-legacy-compliance-standards-across-cloud-environments) | April 2023 | | [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | April 2023 | | [New Azure Active Directory authentication-related recommendations for Azure Data Services](#new-azure-active-directory-authentication-related-recommendations-for-azure-data-services) | April 2023 |
+| [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | June 2023 |
### Changes in the recommendation "Machines should be configured securely"
Learn how to [Customize the set of standards in your regulatory compliance dashb
| Azure Database for MySQL should have an Azure Active Directory administrator provisioned | Provision an Azure AD administrator for your Azure Database for MySQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services | Based on policy: [An Azure Active Directory administrator should be provisioned for MySQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f146412e9-005c-472b-9e48-c87b72ac229e) | | Azure Database for PostgreSQL should have an Azure Active Directory administrator provisioned | Provision an Azure AD administrator for your Azure Database for PostgreSQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services | Based on policy: [An Azure Active Directory administrator should be provisioned for PostgreSQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb4dec045-250a-48c2-b5cc-e0c4eec8b5b4) |
+### DevOps Resource Deduplication for Defender for DevOps
+
+**Estimated date for change: June 2023**
+
+To improve the Defender for DevOps user experience and enable further integration with Defender for Cloud's rich set of capabilities, Defender for DevOps will no longer allow duplicate instances of a DevOps organization to be onboarded to an Azure tenant.
+
+If you do not have an instance of a DevOps organization onboarded more than once to your tenant, no further action is required. If you do have more than one instance of a DevOps organization onboarded to your tenant, the subscription owner will be notified and will need to delete the DevOps Connector(s) they do not want to keep by navigating to Defender for Cloud Environment Settings.
+
+Customers will have until June 30, 2023 to resolve this issue. After this date, only the most recent DevOps Connector created where an instance of the DevOps organization exists will remain onboarded to Defender for DevOps.
+ ## Next steps
defender-for-cloud Workflow Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md
Unfortunately, this change came with an unavoidable breaking change. The breakin
1. Navigate to the logic app that is connected to the policy. 1. Select **Logic app designer**. 1. Select the **three dot** > **Rename**.
-1. Rename the Defender for cloud connector as follows:
+1. Rename the Defender for Cloud connector as follows:
| Original name | New name| |--|--|
defender-for-iot Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alerts.md
# Microsoft Defender for IoT alerts
-Microsoft Defender for IoT alerts enhance your network security and operations with real-time details about events logged in your network. Alerts are messages that a Defender for IoT engine triggers when OT or Enterprise IoT network sensors detect changes or suspicious activity in network traffic that needs your attention.
+Microsoft Defender for IoT alerts enhance your network security and operations with real-time details about events logged in your network. Alerts are triggered when OT or Enterprise IoT network sensors detect changes or suspicious activity in network traffic that needs your attention.
For example:
Use the following table to learn more about each alert status and triage option.
|**Active** | - Azure portal only | Set an alert to *Active* to indicate that an investigation is underway, but that the alert can't yet be closed or otherwise triaged. <br><br>This status has no effect elsewhere in Defender for IoT. | |**Closed** | - Azure portal <br><br>- OT network sensors <br><br>- On-premises management console | Close an alert to indicate that it's fully investigated, and you want to be alerted again the next time the same traffic is detected.<br><br>Closing an alert adds it to the sensor event timeline.<br><br>On the on-premises management console, *New* alerts are called *Acknowledged*. | |**Learn** | - Azure portal <br><br>- OT network sensors <br><br>- On-premises management console <br><br>*Unlearning* an alert is available only on the OT sensor. | Learn an alert when you want to close it and add it as allowed traffic, so that you aren't alerted again the next time the same traffic is detected. <br><br>For example, when the sensor detects firmware version changes following standard maintenance procedures, or when a new, expected device is added to the network. <br><br>Learning an alert closes the alert and adds an item to the sensor event timeline. Detected traffic is included in data mining reports, but not when calculating other OT sensor reports. <br><br>Learning alerts is available for selected alerts only, mostly those triggered by *Policy* and *Anomaly* engine alerts. |
-|**Mute** | - OT network sensors <br><br>- On-premises management console <br><br>*Unmuting* an alert is available only on the OT sensor. | Mute an alert when you want to close it and not see again for the same traffic, but without adding the alert allowed traffic. <br><br>For example, when the Operational engine triggers an alert indicating that the PLC Mode was changed on a device. The new mode may indicate that the PLC isn't secure, but after investigation, it's determined that the new mode is acceptable. <br><br>Muting an alert closes it, but doesn't add an item to the sensor event timeline. Detected traffic is included in data mining reports, but not when when calculating data for other sensor reports. <br><br>Muting an alert is available for selected alerts only, mostly those triggered by the *Anomaly*, *Protocol Violation*, or *Operational* engines. |
+|**Mute** | - OT network sensors <br><br>- On-premises management console <br><br>*Unmuting* an alert is available only on the OT sensor. | Mute an alert when you want to close it and not see again for the same traffic, but without adding the alert allowed traffic. <br><br>For example, when the Operational engine triggers an alert indicating that the PLC Mode was changed on a device. The new mode may indicate that the PLC isn't secure, but after investigation, it's determined that the new mode is acceptable. <br><br>Muting an alert closes it, but doesn't add an item to the sensor event timeline. Detected traffic is included in data mining reports, but not when calculating data for other sensor reports. <br><br>Muting an alert is available for selected alerts only, mostly those triggered by the *Anomaly*, *Protocol Violation*, or *Operational* engines. |
> [!TIP] > If you know ahead of time which events are irrelevant for you, such as during a maintenance window, or if you don't want to track the event in the event timeline, create an alert exclusion rule on an on-premises management console instead.
Use the following table to learn more about each alert status and triage option.
> For more information, see [Create alert exclusion rules on an on-premises management console](how-to-accelerate-alert-incident-response.md#create-alert-exclusion-rules-on-an-on-premises-management-console). >
+### Triage OT alerts during learning mode
+
+*Learning mode* refers to the initial period after an OT sensor is deployed, when your OT sensor learns your network's baseline activity, including the devices and protocols in your network, and the regular file transfers that occur between specific devices.
+
+Use learning mode to perform an initial triage on the alerts in your network, *learning* those you want to mark as authorized, expected activity. Learned traffic doesn't generate new alerts the next time the same traffic is detected.
+
+For more information, see [Create a learned baseline of OT alerts](ot-deploy/create-learned-baseline.md).
+ ## Next steps Review alert types and messages to help you understand and plan remediation actions and playbook integrations. For more information, see [OT monitoring alert types and descriptions](alert-engine-messages.md).
defender-for-iot Management Integration Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/api/management-integration-apis.md
This API returns data about a specific device per a given device ID.
| **u_mac_address_objects** |JSON array of MAC addresses | Not nullable | Array of [MAC address](#mac_address_object-fields) objects | | **u_protocol_objects** | JSON array of protocols | Not nullable | An array of [protocol](#protocol_object-fields) objects | | **u_vlans** |JSON array of VLAN objects | Not nullable | An array of [VLAN](#vlan_object-fields) objects |
-| **u_purdue_layer** | String | Not nullable |Defines the default [Purdue layer](../plan-network-monitoring.md#purdue-reference-model-and-defender-for-iot) for this device type. |
+| **u_purdue_layer** | String | Not nullable |Defines the default [Purdue layer](../best-practices/understand-network-architecture.md) for this device type. |
| **u_sensor_ids** |JSON array of sensor ID objects |Not nullable | An array of [sensor ID](#sensor_id_object-fields) objects | | **u_cm_device_url** |String |Not nullable | The URL used to access the device on the on-premises management console. | | **u_device_urls** |JSON array of URL objects |Not nullable | An array of [device URL](#device_url_object-fields) objects |
curl -k -H "Authorization: <Authorization token>" "https://<IP Address>/external
```rest curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" "https://127.0.0.1/external/v3/integration/devicecves/1664781014000" ```+ ## Next steps
defender-for-iot Dell Edge 5200 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-edge-5200.md
This article describes the Dell Edge 5200 appliance for OT sensors.
|**Hardware profile** | E500| |**Performance** | Max bandwidth: 1 Gbps<br>Max devices: 10,000 | |**Physical specifications** | Mounting: Wall Mount<br>Ports: 3x RJ45 |
-|**Status** | Supported, Not available pre-configured|
+|**Status** | Supported, Not available preconfigured|
The following image shows the hardware elements on the Dell Edge 5200 that are used by Defender for IoT:
The following image shows the hardware elements on the Dell Edge 5200 that are u
|Processor| Intel® Core™ i7-9700TE| |Chipset|Intel C246| |Memory |32 GB = Two 16 GB DDR4 ECC UDIMM|
-|Storage| 1x 512GB SSD |
+|Storage| 1x 512 GB SSD |
|Network controller|3x Intel GbE: 2x i210 + i219LM PHY| |Management|Intel AMT supported on i5 and i7 CPUs| |Device access| 6x USB 3.0|
The following image shows the hardware elements on the Dell Edge 5200 that are u
|Quantity|PN|Description| |:-|:-|:-|
-|1|210-BCNV|Dell EMC Edge Gateway 5200,Core i7-9700TE.32G.512G, Win 10 IoT.TPM, OEM|
+|1|210-BCNV|Dell EMC Edge Gateway 5200, Core i7-9700TE.32G.512G, Win 10 IoT.TPM, OEM|
|1|631-ADIJ|User Documentation EMEA 2| |1|683-1187|No Installation Service Selected (Contact Sales Rep for more details)| |1|709-BDGW|Parts Only Warranty 15 Months|
Continue understanding system requirements for physical or virtual appliances. F
Then, use any of the following procedures to continue: -- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)-- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install software](../how-to-install-software.md)
+- [Download software for an OT sensor](../ot-deploy/install-software-ot-sensor.md#download-software-files-from-the-azure-portal)
+- [Download software files for an on-premises management console](../ot-deploy/install-software-on-premises-management-console.md#download-software-files-from-the-azure-portal)
defender-for-iot Dell Poweredge R340 Xl Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-poweredge-r340-xl-legacy.md
Continue understanding system requirements for physical or virtual appliances. F
Then, use any of the following procedures to continue: -- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)-- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install software](../how-to-install-software.md)
+- [Download software for an OT sensor](../ot-deploy/install-software-ot-sensor.md#download-software-files-from-the-azure-portal)
+- [Download software files for an on-premises management console](../ot-deploy/install-software-on-premises-management-console.md#download-software-files-from-the-azure-portal)
defender-for-iot Dell Poweredge R350 E1800 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-poweredge-r350-e1800.md
Continue understanding system requirements for physical or virtual appliances. F
Then, use any of the following procedures to continue: -- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)-- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install software](../how-to-install-software.md)
+- [Download software for an OT sensor](../ot-deploy/install-software-ot-sensor.md#download-software-files-from-the-azure-portal)
+- [Download software files for an on-premises management console](../ot-deploy/install-software-on-premises-management-console.md#download-software-files-from-the-azure-portal)
defender-for-iot Hpe Edgeline El300 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-edgeline-el300.md
Continue understanding system requirements for physical or virtual appliances. F
Then, use any of the following procedures to continue: -- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)-- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install software](../how-to-install-software.md)
+- [Download software for an OT sensor](../ot-deploy/install-software-ot-sensor.md#download-software-files-from-the-azure-portal)
+- [Download software files for an on-premises management console](../ot-deploy/install-software-on-premises-management-console.md#download-software-files-from-the-azure-portal)
defender-for-iot Hpe Proliant Dl20 Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-legacy.md
Continue understanding system requirements for physical or virtual appliances. F
Then, use any of the following procedures to continue: -- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)-- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install software](../how-to-install-software.md)
+- [Download software for an OT sensor](../ot-deploy/install-software-ot-sensor.md#download-software-files-from-the-azure-portal)
+- [Download software files for an on-premises management console](../ot-deploy/install-software-on-premises-management-console.md#download-software-files-from-the-azure-portal)
defender-for-iot Hpe Proliant Dl20 Plus Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-enterprise.md
The following image shows a sample of the HPE ProLiant DL20 back panel:
|-||-| |1| P44111-B21 | HPE DL20 Gen10+ 4SFF CTO Server| |1| P45252-B21 | Intel Xeon E-2334 FIO CPU for HPE|
-|4| P28610-B21 | HPE 1 TB SATA 7.2K SFF BC HDD|
+|4| P28610-B21 | HPE 1-TB SATA 7.2K SFF BC HDD|
|2| P43019-B21 | HPE 16 GB 1Rx8 PC4-3200AA-E Standard Kit| |1| 869079-B21 | HPE Smart Array E208i-a SR G10 LH Ctrlr (RAID10)|
-|1| P21106-B21 | INT I350 1GbE 4p BASE-T Adapter|
+|1| P21106-B21 | INT I350 1 GbE 4p BASE-T Adapter|
|1| P45948-B21 | HPE DL20 Gen10+ RPS FIO Enable Kit| |2| 865408-B21 | HPE 500W FS Plat Hot Plug LH Power Supply Kit| |1| 775612-B21 | HPE 1U Short Friction Rail Kit|
Optional modules for port expansion include:
|Location |Type|Specifications| |--|--|| | PCI Slot 1 (Low profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25Gb 2-port SFP28 Adapter for HPE |
-| PCI Slot 1 (Low profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10 Gb 2-port SFP+ Adapter for HPE |
-| PCI Slot 2 (High profile) | Quad Port Ethernet NIC| P21106-B21 - Intel I350-T4 Ethernet 1 Gb 4-port BASE-T Adapter for HPE |
+| PCI Slot 1 (Low profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10-Gb 2-port SFP+ Adapter for HPE |
+| PCI Slot 2 (High profile) | Quad Port Ethernet NIC| P21106-B21 - Intel I350-T4 Ethernet 1-Gb 4-port BASE-T Adapter for HPE |
| PCI Slot 2 (High profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25 Gb 2-port SFP28 Adapter for HPE |
-| PCI Slot 2 (High profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10 Gb 2-port SFP+ Adapter for HPE |
+| PCI Slot 2 (High profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10-Gb 2-port SFP+ Adapter for HPE |
| SFPs for Fiber Optic NICs|MultiMode, Short Range|455883-B21 - HPE BLc 10G SFP+ SR Transceiver| | SFPs for Fiber Optic NICs|SingleMode, Long Range | 455886-B21 - HPE BLc 10G SFP+ LR Transceiver|
Continue understanding system requirements for physical or virtual appliances. F
Then, use any of the following procedures to continue: -- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)-- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install software](../how-to-install-software.md)
+- [Download software for an OT sensor](../ot-deploy/install-software-ot-sensor.md#download-software-files-from-the-azure-portal)
+- [Download software files for an on-premises management console](../ot-deploy/install-software-on-premises-management-console.md#download-software-files-from-the-azure-portal)
defender-for-iot Hpe Proliant Dl20 Plus Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-smb.md
The following image shows a sample of the HPE ProLiant DL20 Gen10 back panel:
|-||-| |1| P44111-B21 | HPE DL20 Gen10+ NHP 2LFF CTO Server| |1| P45252-B21 | Intel Xeon E-2334 FIO CPU for HPE|
-|2| P28610-B21 | HPE 1 TB SATA 7.2K SFF BC HDD|
+|2| P28610-B21 | HPE 1-TB SATA 7.2K SFF BC HDD|
|1| P43016-B21 | HPE 8 GB 1Rx8 PC4-3200AA-E Standard Kit| |1| 869079-B21 | HPE Smart Array E208i-a SR G10 LH Ctrlr (RAID10)| |1| P21106-B21 | INT I350 1GbE 4p BASE-T Adapter|
Optional modules for port expansion include:
|Location |Type|Specifications| |--|--||
-| PCI Slot 1 (Low profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25 Gb 2-port SFP28 Adapter for HPE |
-| PCI Slot 1 (Low profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10 Gb 2-port SFP+ Adapter for HPE |
-| PCI Slot 2 (High profile) | Quad Port Ethernet NIC| P21106-B21 - Intel I350-T4 Ethernet 1 Gb 4-port BASE-T Adapter for HPE |
+| PCI Slot 1 (Low profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25-Gb 2-port SFP28 Adapter for HPE |
+| PCI Slot 1 (Low profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10-Gb 2-port SFP+ Adapter for HPE |
+| PCI Slot 2 (High profile) | Quad Port Ethernet NIC| P21106-B21 - Intel I350-T4 Ethernet 1-Gb 4-port BASE-T Adapter for HPE |
| PCI Slot 2 (High profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25 Gb 2-port SFP28 Adapter for HPE | | PCI Slot 2 (High profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10 Gb 2-port SFP+ Adapter for HPE | | SFPs for Fiber Optic NICs|MultiMode, Short Range|455883-B21 - HPE BLc 10G SFP+ SR Transceiver|
Continue understanding system requirements for physical or virtual appliances. F
Then, use any of the following procedures to continue: -- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)-- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install software](../how-to-install-software.md)
+- [Download software for an OT sensor](../ot-deploy/install-software-ot-sensor.md#download-software-files-from-the-azure-portal)
+- [Download software files for an on-premises management console](../ot-deploy/install-software-on-premises-management-console.md#download-software-files-from-the-azure-portal)
defender-for-iot Hpe Proliant Dl20 Smb Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-smb-legacy.md
Continue understanding system requirements for physical or virtual appliances. F
Then, use any of the following procedures to continue: -- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)-- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install software](../how-to-install-software.md)
+- [Download software for an OT sensor](../ot-deploy/install-software-ot-sensor.md#download-software-files-from-the-azure-portal)
+- [Download software files for an on-premises management console](../ot-deploy/install-software-on-premises-management-console.md#download-software-files-from-the-azure-portal)
defender-for-iot Hpe Proliant Dl360 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl360.md
Continue understanding system requirements for physical or virtual appliances. F
Then, use any of the following procedures to continue: -- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)-- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install software](../how-to-install-software.md)
+- [Download software for an OT sensor](../ot-deploy/install-software-ot-sensor.md#download-software-files-from-the-azure-portal)
+- [Download software files for an on-premises management console](../ot-deploy/install-software-on-premises-management-console.md#download-software-files-from-the-azure-portal)
defender-for-iot Neousys Nuvo 5006Lp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/neousys-nuvo-5006lp.md
Continue understanding system requirements for physical or virtual appliances. F
Then, use any of the following procedures to continue: -- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)-- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install software](../how-to-install-software.md)
+- [Download software for an OT sensor](../ot-deploy/install-software-ot-sensor.md#download-software-files-from-the-azure-portal)
+- [Download software files for an on-premises management console](../ot-deploy/install-software-on-premises-management-console.md#download-software-files-from-the-azure-portal)
defender-for-iot Virtual Management Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-management-hyper-v.md
Continue understanding system requirements for physical or virtual appliances. F
Then, use any of the following procedures to continue: -- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)-- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install Microsoft Defender for IoT on-premises management console software](../ot-deploy/install-software-on-premises-management-console.md)
+- [Download software for an OT sensor](../ot-deploy/install-software-ot-sensor.md#download-software-files-from-the-azure-portal)
+- [Download software files for an on-premises management console](../ot-deploy/install-software-on-premises-management-console.md#download-software-files-from-the-azure-portal)
defender-for-iot Virtual Management Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-management-vmware.md
Continue understanding system requirements for physical or virtual appliances. F
Then, use any of the following procedures to continue: -- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)-- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install Microsoft Defender for IoT on-premises management console software](../ot-deploy/install-software-on-premises-management-console.md)
+- [Download software for an OT sensor](../ot-deploy/install-software-ot-sensor.md#download-software-files-from-the-azure-portal)
+- [Download software files for an on-premises management console](../ot-deploy/install-software-on-premises-management-console.md#download-software-files-from-the-azure-portal)
defender-for-iot Virtual Sensor Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-sensor-hyper-v.md
Continue understanding system requirements for physical or virtual appliances. F
Then, use any of the following procedures to continue: -- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)-- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install OT monitoring software on OT sensors](../ot-deploy/install-software-ot-sensor.md)
+- [Download software for an OT sensor](../ot-deploy/install-software-ot-sensor.md#download-software-files-from-the-azure-portal)
+- [Download software files for an on-premises management console](../ot-deploy/install-software-on-premises-management-console.md#download-software-files-from-the-azure-portal)
defender-for-iot Ys Techsystems Ys Fit2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/ys-techsystems-ys-fit2.md
This article describes the **YS-techsystems YS-FIT2** appliance deployment and i
| Appliance characteristic |Details | ||| |**Hardware profile** | L100|
-|**Performance** | Max bandwidth: 10Mbps<br>Max devices: 100|
+|**Performance** | Max bandwidth: 10 Mbps<br>Max devices: 100|
|**Physical specifications** | Mounting: DIN/VESA<br>Ports: 2x RJ45| |**Status** | Supported; Available as pre-configured |
Continue understanding system requirements for physical or virtual appliances. F
Then, use any of the following procedures to continue: -- [Purchase sensors or download software for sensors](../onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)-- [Download software for an on-premises management console](../how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install software](../how-to-install-software.md)
+- [Download software for an OT sensor](../ot-deploy/install-software-ot-sensor.md#download-software-files-from-the-azure-portal)
+- [Download software files for an on-premises management console](../ot-deploy/install-software-on-premises-management-console.md#download-software-files-from-the-azure-portal)
defender-for-iot Architecture Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture-connections.md
Title: OT sensor cloud connection methods - Microsoft Defender for IoT
+ Title: Methods for connecting sensors to Azure - Microsoft Defender for IoT
description: Learn about the architecture models available for connecting your sensors to Microsoft Defender for IoT. Last updated 02/23/2023
-# OT sensor cloud connection methods
+# Methods for connecting sensors to Azure
-This article describes the architectures and methods supported for connecting your Microsoft Defender for IoT OT sensors to the Azure portal in the cloud.
-OT network sensors connect to Azure to provide data about detected devices, alerts, and sensor health, to access threat intelligence packages, and more. For example, connected Azure services include IoT Hub, Blob Storage, Event Hubs, Aria, the Microsoft Download Center.
+This article is one in a series of articles describing the [deployment path](ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT.
-The cloud connection methods described in this article are supported only for OT sensor version 22.x and later. All methods provide:
+Use the content below to learn about the architectures and methods supported for connecting Defender for IoT sensors to the Azure portal in the cloud.
-- **Improved security**, without additional security configurations. [Connect to Azure using specific and secure endpoints](how-to-set-up-your-network.md#sensor-access-to-azure-portal), without the need for any wildcards. -- **Encryption**, Transport Layer Security (TLS1.2/AES-256) provides encrypted communication between the sensor and Azure resources.
+Network sensors connect to Azure to provide data about detected devices, alerts, and sensor health, to access threat intelligence packages, and more. For example, connected Azure services include IoT Hub, Blob Storage, Event Hubs, Aria, and the Microsoft Download Center.
-- **Scalability** for new features supported only in the cloud
+All connection methods provide:
-For more information, see [Choose a sensor connection method](connect-sensors.md#choose-a-sensor-connection-method) and [Download endpoint details](how-to-manage-sensors-on-the-cloud.md#endpoint).
+- **Improved security**, without additional security configurations. [Connect to Azure using specific and secure endpoints](networking-requirements.md#sensor-access-to-azure-portal), without the need for any wildcards.
+
+- **Encryption**: Transport Layer Security (TLS 1.2/AES-256) provides encrypted communication between the sensor and Azure resources.
+- **Scalability** for new features supported only in the cloud.
> [!IMPORTANT] > To ensure that your network is ready, we recommend that you first run your connections in a lab or testing environment so that you can safely validate your Azure service configurations. >
+## Choose a sensor connection method
+
+Use this section to help determine which connection method is right for your cloud-connected Defender for IoT sensor.
+
+|If ... |... Then use |
+|||
+|- You require private connectivity between your sensor and Azure, <br>- Your site is connected to Azure via ExpressRoute, or <br>- Your site is connected to Azure over a VPN | **[Proxy connections with an Azure proxy](#proxy-connections-with-an-azure-proxy)** |
+|- Your sensor needs a proxy to reach from the OT network to the cloud, or <br>- You want multiple sensors to connect to Azure through a single point | **[Proxy connections with proxy chaining](#proxy-connections-with-proxy-chaining)** |
+|- You want to connect your sensor to Azure directly | **[Direct connections](#direct-connections)** |
+|- You have sensors hosted in multiple public clouds | **[Multicloud connections](#multicloud-connections)** |
+
+> [!NOTE]
+> While most connection methods are relevant for OT sensors only, [Direct connections](#direct-connections) are also used for [Enterprise IoT sensors](eiot-sensor.md).
+ ## Proxy connections with an Azure proxy The following image shows how you can connect your sensors to the Defender for IoT portal in Azure through a proxy in the Azure VNET. This configuration ensures confidentiality for all communications between your sensor and Azure.
The following image shows how you can connect your sensors to the Defender for I
:::image type="content" source="media/architecture-connections/direct.png" alt-text="Diagram of a direct connection to Azure." border="false":::
-With direct connections
+With direct connections:
- Any sensors connected to Azure data centers directly over the internet have a secure and encrypted connection to the Azure data centers. Transport Layer Security (TLS1.2/AES-256) provides *always-on* communication between the sensor and Azure resources. - The sensor initiates all connections to the Azure portal. Initiating connections only from the sensor protects internal network devices from unsolicited inbound connections, but also means that you don't need to configure any inbound firewall rules.
-For more information, see [Connect directly](connect-sensors.md#connect-directly).
+For more information, see [Provision sensors for cloud management](ot-deploy/provision-cloud-management.md).
## Multicloud connections
Depending on your environment configuration, you might connect using one of the
For more information, see [Connect via multicloud vendors](connect-sensors.md#connect-via-multicloud-vendors).
-## Working with a mixture of sensor software versions
-
-If you're a customer with an existing production deployment, we recommend that upgrade any legacy sensor versions to version 22.1.x.
-
-While you'll need to migrate your connections before the [legacy version reaches end of support](release-notes.md#versioning-and-support-for-on-premises-software-versions), you can currently deploy a hybrid network of sensors, including legacy software versions with their IoT Hub connections, and sensors with the connection methods described in this article.
-
-After migrating, you can remove any relevant IoT Hubs from your subscription as they'll no longer be required for your sensor connections.
-
-For more information, see [Update OT system software](update-ot-software.md) and [Migration for existing customers](connect-sensors.md#migration-for-existing-customers).
- ## Next steps
-For more information, see [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md).
-
+> [!div class="step-by-step"]
+> [« Plan your OT monitoring system](best-practices/plan-corporate-monitoring.md)
defender-for-iot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture.md
Title: System architecture for OT monitoring - Microsoft Defender for IoT
+ Title: System architecture for OT/IoT monitoring - Microsoft Defender for IoT
description: Learn about the Microsoft Defender for IoT system architecture and data flow.- Previously updated : 12/25/2022+ Last updated : 01/18/2023
-# System architecture for OT system monitoring
+# Microsoft Defender for IoT components
The Microsoft Defender for IoT system is built to provide broad coverage and visibility from diverse data sources.
Defender for IoT connects to both cloud and on-premises components, and is built
Defender for IoT includes the following OT security monitoring components: - **The Azure portal**, for cloud management and integration to other Microsoft services, such as Microsoft Sentinel.-- **OT network sensors**, to detect OT devices across your network. OT network sensors are deployed on either a virtual machine or a physical appliance, and configured as cloud-connected sensors, or fully on-premises, locally managed sensors.-- **An on-premises management console** for centralized OT site management in local, air-gapped environments.
-## What is a Defender for IoT committed device?
+- **Operational technology (OT) or Enterprise IoT network sensors**, to detect devices across your network. Defender for IoT network sensors are deployed on either a virtual machine or a physical appliance. OT sensors can be configured as cloud-connected sensors, or fully on-premises, locally managed sensors.
+- **An on-premises management console** for centralized OT sensor management and monitoring for local, air-gapped environments.
-## OT network sensors
+## OT and Enterprise IoT network sensors
-OT network sensors discover and continuously monitor network traffic across your OT devices.
+Defender for IoT network sensors discover and continuously monitor network traffic across your network devices.
-- Network sensors are purpose-built for OT networks and connect to a SPAN port or network TAP. OT network sensors can provide visibility into risks within minutes of connecting to the network.
+- Network sensors are purpose-built for OT/IoT networks and connect to a SPAN port or network TAP. Defender for IoT network sensors can provide visibility into risks within minutes of connecting to the network.
-- Network sensors use OT-aware analytics engines and Layer-6 Deep Packet Inspection (DPI) to detect threats, such as fileless malware, based on anomalous or unauthorized activity.
+- Network sensors use OT/IoT-aware analytics engines and Layer-6 Deep Packet Inspection (DPI) to detect threats, such as fileless malware, based on anomalous or unauthorized activity.
Data collection, processing, analysis, and alerting takes place directly on the sensor, which can be ideal for locations with low bandwidth or high-latency connectivity. Only telemetry and insights are transferred on for management, either to the Azure portal or an on-premises management console.
-For more information, see [Onboard OT sensors to Defender for IoT](onboard-sensors.md).
+For more information, see [Defender for IoT OT deployment path](ot-deploy/ot-deploy-path.md).
### Cloud-connected vs. local OT sensors Cloud-connected sensors are sensors that are connected to Defender for IoT in Azure, and differ from locally managed sensors as follows:
-When you have a cloud connected OT network sensor:
+**When you have a cloud connected OT network sensor**:
- All data that the sensor detects is displayed in the sensor console, but alert information is also delivered to Azure, where it can be analyzed and shared with other Azure services.
When you have a cloud connected OT network sensor:
- The sensor name defined during onboarding is the name displayed in the sensor, and is read-only from the sensor console.
-In contrast, when working with locally managed sensors:
+**In contrast, when working with locally managed sensors**:
- View any data for a specific sensor from the sensor console. For a unified view of all information detected by several sensors, use an on-premises management console.
For example, the **policy violation detection** engine models industry control s
Since many detection algorithms were built for IT, rather than OT networks, the extra baseline for ICS networks helps to shorten the system's learning curve for new detections.
-Defender for IoT network sensors include the following analytics engines:
-
-|Name |Description |
-|||
-|**Protocol violation detection engine** | Identifies the use of packet structures and field values that violate ICS protocol specifications. <br><br>For example, Modbus exceptions or the initiation of an obsolete function code alerts. |
-|**Industrial malware detection engine** | Identifies behaviors that indicate the presence of known malware, such as Conficker, Black Energy, Havex, WannaCry, NotPetya, and Triton. |
-|**Anomaly detection engine** | Detects unusual machine-to-machine (M2M) communications and behaviors. <br><br>This engine models ICS networks and therefore requires a shorter learning period than analytics developed for IT. Anomalies are detected faster, with minimal false positives. <br><br>For example, Excessive SMB sign-in attempts, and PLC Scan Detected alerts. |
-|**Operational incident detection** | Detects operational issues such as intermittent connectivity that can indicate early signs of equipment failure. <br><br> For example, the device might be disconnected (unresponsive), or the Siemens S7 stop PLC command was sent alerts. |
+Defender for IoT network sensors include the following main analytics engines:
+|Name |Description | Examples |
+||||
+|**Protocol violation detection engine** | Identifies the use of packet structures and field values that violate ICS protocol specifications. <br><br>Protocol violations occur when the packet structure or field values don't comply with the protocol specification.| An *"Illegal MODBUS Operation (Function Code Zero)"* alert indicates that a primary device sent a request with function code 0 to a secondary device. This action isn't allowed according to the protocol specification, and the secondary device might not handle the input correctly |
+| **Policy Violation** | A policy violation occurs with a deviation from baseline behavior defined in learned or configured settings. | An *"Unauthorized HTTP User Agent"* alert indicates that an application that wasn't learned or approved by policy is used as an HTTP client on a device. This might be a new web browser or application on that device.|
+|**Industrial malware detection engine** | Identifies behaviors that indicate the presence of malicious network activity via known malware, such as Conficker, Black Energy, Havex, WannaCry, NotPetya, and Triton. | A *"Suspicion of Malicious Activity (Stuxnet)"* alert indicates that the sensor detected suspicious network activity known to be related to the Stuxnet malware. This malware is an advanced persistent threat aimed at industrial control and SCADA networks. |
+|**Anomaly detection engine** | Detects unusual machine-to-machine (M2M) communications and behaviors. <br><br>This engine models ICS networks and therefore requires a shorter learning period than analytics developed for IT. Anomalies are detected faster, with minimal false positives. | A *"Periodic Behavior in Communication Channel"* alert reflects periodic and cyclic behavior of data transmission, which is common in industrial networks. <br>Other examples include excessive SMB sign-in attempts, and PLC scan detected alerts. |
+|**Operational incident detection** | Detects operational issues such as intermittent connectivity that can indicate early signs of equipment failure. | A *"Device is Suspected to be Disconnected (Unresponsive)"* alert is triggered when a device isn't responding to any kind of request for a predefined period. This alert might indicate a device shutdown, disconnection, or malfunction. <br>Another example might be if the Siemens S7 stop PLC command was sent alerts. |
## Management options
Defender for IoT provides hybrid network support using the following management
:::image type="content" source="media/release-notes/new-interface.png" alt-text="Screenshot that shows the updated interface." lightbox="media/release-notes/new-interface.png"::: -- **The on-premises management console**. In air-gapped environments, the on-premises management console provides a centralized view and management options for devices and threats detected by connected OT network sensors. The on-premises management console also lets you organize your network into separate sites and zones to support a [Zero Trust](/security/zero-trust/) mindset, and provides extra maintenance tools and reporting features.
+- **The on-premises management console**. In air-gapped environments, you can get a central view of data from all of your sensors from an on-premises management console, using extra maintenance tools and reporting features.
-## Next steps
+    The software version on your on-premises management console must be equal to that of your most up-to-date sensor version. Each on-premises management console version is backward compatible with older, supported sensor versions, but can't connect to newer sensor versions.
-> [!div class="nextstepaction"]
-> [Understand OT sensor connection methods](architecture-connections.md)
+ For more information, see [Air-gapped OT sensor management deployment path](ot-deploy/air-gapped-deploy.md).
-> [!div class="nextstepaction"]
-> [Connect OT sensors to Microsoft Defender for IoT](connect-sensors.md)
+## What is a Defender for IoT committed device?
++
+## Next steps
-> [!div class="nextstepaction"]
-> [Frequently asked questions](resources-frequently-asked-questions.md)
+> [!div class="step-by-step"]
+> [Understand your network architecture »](architecture.md)
defender-for-iot Back Up Restore Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/back-up-restore-management.md
+
+ Title: Back up and restore the on-premises management console - Microsoft Defender for IoT
+description: Learn how to back up and restore the Microsoft Defender for IoT on-premises management console.
Last updated : 03/09/2023+++
+# Back up and restore the on-premises management console
+
+Back up and restore your on-premises management console to help protect against hard drive failures and data loss. In this article, learn how to:
+
+- Define backup and restore settings
+- Run an unscheduled backup via CLI
+- Use an SMB server to save your backup files to an external server
+- Restore the on-premises management console from the latest backup via CLI
+
+## Define backup and restore settings
+
+The on-premises management console is automatically backed up daily to the `/var/cyberx/backups` directory. Backup files do *not* include PCAP or log files, which must be manually backed up if needed.
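For example, you can confirm that daily backups are being created by listing the backup directory from a privileged SSH session (a quick check, not part of the official procedure):

```bash
# List backup files created by the daily backup job on the on-premises management console
sudo ls -lh /var/cyberx/backups
```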
+
+We recommend that you configure your on-premises management console to automatically transfer backup files to your own, internal network.
+
+> [!NOTE]
+> Backup files can be used to restore an on-premises management console only if the on-premises management console's current software version is the same as the version in the backup file.
+
+## Start an immediate, unscheduled backup via CLI
+
+You may want to create a manual backup file, such as just after updating your OT sensor software.
+
+To run a manual backup from the CLI:
+
+1. Sign into the on-premises management console as a privileged user via SSH/Telnet.
+
+1. Run:
+
+ ```bash
+ sudo cyberx-management-backup -full
+ ```
+
+## Save your backup file to an external server (SMB)
+
+We recommend saving your on-premises management console backup files on your internal network. To do this, you may want to use an SMB server. For example:
+
+1. Create a shared folder on the external SMB server, and make sure that you have the folder's path and the credentials required to access the SMB server.
+
+1. Sign into your on-premises management console via SFTP and create a directory for your backup files. Run:
+
+ ```bash
+    sudo mkdir /<backup_folder_name_on_server>
+    sudo chmod 777 /<backup_folder_name_on_server>/
+ ```
+
+1. Edit the `fstab` file with details about your backup folder. Run:
+
+ ```bash
+ sudo nano /etc/fstab
+
+ add - //<server_IP>/<folder_path> /<backup_folder_name_on_server> cifs rw,credentials=/etc/samba/user,vers=3.0,uid=cyberx,gid=cyberx,file_mode=0777,dir_mode=0777 0 0
+ ```
+
+1. Edit and create credentials to share for the SMB server. Run:
+
+ ```bash
+ sudo nano /etc/samba/user
+ ```
+
+1. Add your credentials as follows:
+
+ ```bash
+ username=<user name>
+ password=<password>
+ ```
+
+1. Mount the backup directory. Run:
+
+ ```bash
+ sudo mount -a
+ ```
+
+1. Configure your on-premises management console to use the shared folder on the SMB server as its backup directory. Run:
+
+ ```bash
+    sudo nano /var/cyberx/properties/backup.properties
+ ```
+
+    Set the `backup_directory_path` to the mounted folder on your on-premises management console where you want to save your backup files.
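For example, the relevant line in `backup.properties` might look like the following sketch, where the key name comes from the step above and the path is the placeholder folder you mounted earlier:

```config
# Hypothetical example: write backup files to the mounted SMB share
backup_directory_path=/<backup_folder_name_on_server>
```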
+
+## Restore from the latest backup via CLI
+
+To restore your OT sensor from the latest backup file via CLI:
+
+1. Sign into the on-premises management console as a privileged user via SSH/Telnet.
+
+1. Run:
+
+ ```bash
+    sudo cyberx-management-system-restore
+ ```
+
+## Next steps
+
+[Maintain the on-premises management console](how-to-manage-the-on-premises-management-console.md)
defender-for-iot Back Up Restore Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/back-up-restore-sensor.md
+
+ Title: Back up and restore OT network sensors from the sensor console - Microsoft Defender for IoT
+description: Learn how to back up and restore Microsoft Defender for IoT OT network sensors from the sensor console.
Last updated : 03/09/2023+++
+# Back up and restore OT network sensors from the sensor console
+
+OT sensor data can be backed up and restored from the sensor console to help protect against hard drive failures and data loss. In this article, learn how to:
+
+- Set up automatic backup files from the sensor console GUI or via CLI
+- Back up files manually via sensor console GUI and CLI
+- Use an SMB server to save your backup file to an external server
+- Restore an OT sensor from the GUI or via CLI
+
+## Set up backup and restore files
+
+OT sensors are automatically backed up daily at 3:00 AM, including configuration and detected data. Backup files do *not* include PCAP or log files, which must be manually backed up if needed.
+
+We recommend that you configure your system to automatically transfer backup files to your own internal network, or an [on-premises management console](back-up-sensors-from-management.md).
+
+For more information, see [On-premises backup file capacity](references-data-retention.md#on-premises-backup-file-capacity).
+
+> [!NOTE]
+> Backup files can be used to restore an OT sensor only if the OT sensor's current software version is the same as the version in the backup file.
+
+### Turn on backup functionality
+
+If your OT sensor is configured *not* to run automatic backups, you can turn this back on manually in the `/var/cyberx/properties/backup.properties` file on the OT sensor machine.
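For example, you might open the file in an editor to review the backup settings (a sketch; the available property names are listed in the file itself):

```bash
# Edit the OT sensor's backup configuration file
sudo nano /var/cyberx/properties/backup.properties
```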
+
+## Create a manual backup from the GUI
+
+You may want to create a manual backup file, such as just after updating your OT sensor software.
+
+**To create a manual backup file from the GUI**:
+
+1. Sign into the OT sensor GUI and select **System settings** > **Sensor management** > **Health and troubleshooting** > **Backup & restore**.
+
+1. In the **Backup & restore pane**:
+
+ - Enter a meaningful filename for your backup file.
+ - Select the content you want to back up.
+ - Select **Export**.
+
+Your new backup file is listed in the **Archived files** area of the backup pane.
+
+> [!NOTE]
+> Backup files can be used to restore data, but they can't be opened without the provided one-time password (OTP) and assistance from Microsoft support. Open a support ticket if you need to open a backup file.
+
+## Start an immediate, unscheduled backup via CLI
+
+You may want to create a manual backup file, such as just after updating your OT sensor software.
+
+To run a manual backup from the CLI, use the `cyberx-xsense-system-backup` CLI command.
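For example, a manual backup from a privileged SSH session on the sensor might look like this sketch:

```bash
# Trigger an immediate, unscheduled backup on the OT sensor
sudo cyberx-xsense-system-backup
```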
+
+For more information, see the [OT sensor CLI reference](cli-ot-sensor.md#start-an-immediate-unscheduled-backup).
+
+## Save your backup to an external server (SMB)
+
+We recommend saving your OT sensor backup files on your internal network. To do this, you may want to use an SMB server. For example:
+
+1. Create a shared folder on the external SMB server, and make sure that you have the folder's path and the credentials required to access the SMB server.
+
+1. Sign into your OT sensor via SFTP and create a directory for your backup files. Run:
+
+ ```bash
+ sudo mkdir /<backup_folder_name>
+
+ sudo chmod 777 /<backup_folder_name>/
+ ```
+
+1. Edit the `fstab` file with details about your backup folder. Run:
+
+ ```bash
+ sudo nano /etc/fstab
+
+    add - //<server_IP>/<folder_path> /<backup_folder_name> cifs rw,credentials=/etc/samba/user,vers=3.0,uid=cyberx,gid=cyberx,file_mode=0777,dir_mode=0777 0 0
+ ```
+
+1. Edit and create credentials to share for the SMB server. Run:
+
+ ```bash
+ sudo nano /etc/samba/user
+ ```
+
+1. Add your credentials as follows:
+
+ ```bash
+ username=<user name>
+ password=<password>
+ ```
+
+1. Mount the backup directory. Run:
+
+ ```bash
+ sudo mount -a
+ ```
+
+1. Configure your backup directory on the SMB server to use the shared file on the OT sensor. Run:
+
+ ```bash
+    sudo nano /var/cyberx/properties/backup.properties
+ ```
+
+ Set the `backup_directory_path` to the folder on your OT sensor where you want to save your backup files.
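After completing these steps, you can verify that the share is mounted correctly before relying on it for backups (a quick check; the folder name is the placeholder used in the earlier steps):

```bash
# Confirm the CIFS share is mounted and has free space
mount | grep cifs
df -h /<backup_folder_name>
```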
+
+## Restore an OT sensor from the GUI
+
+1. Sign into the OT sensor via SFTP and download the backup file you want to use to a location accessible from the OT sensor GUI.
+
+ Backup files are saved on your OT sensor machine, at `/var/cyberx/backups`, and are named using the following syntax: `<sensor name>-backup-version-<version>-<date>.tar`.
+
+ For example: `Sensor_1-backup-version-2.6.0.102-2019-06-24_09:24:55.tar`
+
+ > [!IMPORTANT]
+ > Make sure that the backup file you select uses the same OT sensor software version that's currently installed on your OT sensor.
+
+1. Sign into the OT sensor GUI and select **System settings** > **Sensor management** > **Health and troubleshooting** > **Backup & restore** > **Restore**.
+
+1. Select **Browse** to select your downloaded backup file. The sensor will start to restore from the selected backup file.
+
+1. When the restore process is complete, select **Close**.
+
+## Restore an OT sensor from the latest backup via CLI
+
+To restore your OT sensor from the latest backup file via CLI:
+
+1. Make sure that your backup file has the same OT sensor software version as the current software version on the OT sensor.
+
+1. Use the `cyberx-xsense-system-restore` CLI command to restore your OT sensor.
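    For example, from a privileged session on the sensor, the restore might look like this sketch:

    ```bash
    # Restore the OT sensor from the most recent backup file
    sudo cyberx-xsense-system-restore
    ```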
+
+For more information, see the [OT sensor CLI reference](cli-ot-sensor.md#start-an-immediate-unscheduled-backup).
+
+## Next steps
+
+[Maintain OT network sensors from the GUI](how-to-manage-individual-sensors.md)
+
+[Backup OT network sensors from the on-premises management console](back-up-sensors-from-management.md)
defender-for-iot Back Up Sensors From Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/back-up-sensors-from-management.md
+
+ Title: Back up OT network sensors from the on-premises management console - Microsoft Defender for IoT
+description: Learn how to back up Microsoft Defender for IoT OT network sensors from the on-premises management console.
Last updated : 03/09/2023+++
+# Back up OT network sensors from the on-premises management console
+
+Back up your OT network sensors from the on-premises management console to help protect against hard drive failures and data loss. In this article, learn how to:
+
+- Manage the sensor backup files
+- Configure backup settings
+- Run an unscheduled backup
+- View backup notifications
+- Use an SMB server to save your backup files to an external server
+
+## Manage OT sensor backup files
+
+Define OT sensor backup schedules from your on-premises management console to streamline settings across your system and store backup files on your on-premises management console.
+
+The on-premises management console can store up to nine backup files for each connected OT sensor, provided that the backup files don't exceed the allocated backup space.
+
+Backup files are copied from the OT sensor to the on-premises management console over an encrypted channel.
+
+For more information, see [Set up backup and restore files](back-up-restore-sensor.md#set-up-backup-and-restore-files).
+
+## Configure OT sensor backup settings
+
+1. Sign into your on-premises management console and go to **System Settings**. In the **Management console general configuration** area, select **Schedule Sensor Backups**.
+
+1. In the **Sensor Backup Schedule** dialog, toggle on the **Collect Backups** option so that it reads **Collect Backups On**.
+
+1. Enter scheduling details for your backup, using a 24-hour clock in the time value. For example, to schedule a backup at 6:00 PM, enter **18:00**.
+
+1. Enter the number of GB you want to allocate for backup storage. When the configured limit is exceeded, the oldest backup file is deleted.
+
+ **If you're storing backup files on the on-premises management console**, supported values are defined based on your [hardware profiles](ot-appliance-sizing.md). For example:
+
+ |Hardware profile |Backup storage availability |
+ |||
+ |**E1800** |Default storage is 40 GB; limit is 100 GB. |
+ |**L500** | Default storage is 20 GB; limit is 50 GB. |
+ |**L100** | Default storage is 10 GB; limit is 25 GB. |
+ |**L60** | Default storage is 10 GB; limit is 25 GB. |
+
+ **If you're storing backup files on an external server**, there's no maximum storage. However, keep in mind:
+
+ - If the allocated storage space is exceeded, the OT sensor isn't backed up.
+ - The on-premises management console will still attempt to retrieve backup files for managed OT sensors. If the backup file for one OT sensor exceeds the configured storage, the on-premises management console skips it and continues on to the next OT sensor's backup file.
+
+1. Enter the number of backup files you want to retain on each OT sensor.
+
+1. To define a custom path for your backup storage on the on-premises management console, toggle on the **Custom Path** option and enter the path to your backup storage. For example, you may want to save your backup files on an external server.
+
+ Supported characters include alphanumeric characters, forward slashes (**/**), and underscores (**_**).
+
+ Make sure that the location you enter is accessible by the on-premises management console.
+
+ By default, backup files are stored on your on-premises management console at `/var/cyberx/sensor-backups`.
+
+1. Select **SAVE** to save your changes.
+
+## Run an immediate, unscheduled backup
+
+1. Sign into your on-premises management console and go to **System Settings**. In the **Management console general configuration** area, select **Schedule Sensor Backups**.
+
+1. Locate the OT sensor you want to back up and select **Back up Now**.
+
+1. Select **CLOSE** to close the dialog.
+
+## View backup notifications
+
+To check for backup notifications on the on-premises management console:
+
+1. Sign into your on-premises management console and go to **System Settings**. In the **Management console general configuration** area, select **Schedule Sensor Backups**.
+
+1. In the **Sensor Backup Schedule** dialog, check for details about recent backup activities for each sensor listed. For example:
+
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/sensor-location.png" alt-text="View your sensors and where they're located and all relevant information.":::
+
+Backup failures might occur for any of the following scenarios:
+
+- A backup file can't be found or retrieved.
+- Network connection failures.
+- There isn't enough storage space allocated on the on-premises management console for the backup file.
+
+> [!TIP]
+> You may want to send alerts about backup notifications to partner services.
+>
+> To do this, [create a forwarding alert rule](how-to-forward-alert-information-to-partners.md#create-forwarding-rules-on-an-on-premises-management-console) on your on-premises management console. In the **Create Forwarding Rule** dialog box, make sure to select **Report System Notifications**.
+
+## Save your backup file to an external server (SMB)
+
+We recommend saving your OT sensor backup files on your internal network. To do this, you may want to use an SMB server. For example:
+
+1. Create a shared folder on the external SMB server, and make sure that you have the folder's path and the credentials required to access the SMB server.
+
+1. Sign into your OT sensor via SFTP and create a directory for your backup files. Run:
+
+ ```bash
+ sudo mkdir /<backup_folder_name_on_server>
+
+ sudo chmod 777 /<backup_folder_name_on_server>/
+ ```
+
+1. Edit the `fstab` file with details about your backup folder. Run:
+
+ ```bash
+ sudo nano /etc/fstab
+
+    add - //<server_IP>/<folder_path> /<backup_folder_name_on_server> cifs rw,credentials=/etc/samba/user,vers=3.0,uid=cyberx,gid=cyberx,file_mode=0777,dir_mode=0777 0 0
+ ```
+
+1. Edit and create credentials to share for the SMB server. Run:
+
+ ```bash
+ sudo nano /etc/samba/user
+ ```
+
+1. Add your credentials as follows:
+
+ ```bash
+ username=<user name>
+
+ password=<password>
+ ```
+
+1. Mount the backup directory. Run:
+
+ ```bash
+ sudo mount -a
+ ```
+
+1. Configure your backup directory on the SMB server to use the shared file on the OT sensor. Run:
+
+ ```bash
+ sudo nano /var/cyberx/properties/backup.properties
+ ```
+
+ Set the `backup_directory_path` to the folder on your on-premises management console where you want to save your backup files.
+
+## Next steps
+
+For more information, see:
+
+[Manage sensors from the on-premises management console](how-to-manage-sensors-from-the-on-premises-management-console.md)
+
+[Back up and restore OT network sensors from the sensor console](back-up-restore-sensor.md)
defender-for-iot Certificate Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/best-practices/certificate-requirements.md
+
+ Title: SSL/TLS certificate file requirements - Microsoft Defender for IoT
+description: Learn about requirements for SSL/TLS certificates used with Microsoft Defender for IOT OT sensors and on-premises management consoles.
Last updated : 01/17/2023+++
+# SSL/TLS certificate requirements for on-premises resources
+
+This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT.
+
+Use the content below to learn about the requirements for [creating SSL/TLS certificates](../ot-deploy/create-ssl-certificates.md) for use with Microsoft Defender for IoT appliances.
++
+Defender for IoT uses SSL/TLS certificates to secure communication between the following system components:
+
+- Between users and the OT sensor or on-premises management console UI access
+- Between OT sensors and an on-premises management console, including [API communication](../references-work-with-defender-for-iot-apis.md)
+- Between an on-premises management console and a high availability (HA) server, if configured
+- Between OT sensors or on-premises management consoles and partners servers defined in [alert forwarding rules](../how-to-forward-alert-information-to-partners.md)
+
+Some organizations also validate their certificates against a Certificate Revocation List (CRL), the certificate expiration date, and the certificate trust chain. Invalid certificates can't be uploaded to OT sensors or on-premises management consoles, and will block encrypted communication between Defender for IoT components.
+
+> [!IMPORTANT]
+> You must create a unique certificate for each OT sensor, on-premises management console, and high availability server, where each certificate meets required criteria.
+
+## Supported file types
+
+When preparing SSL/TLS certificates for use with Microsoft Defender for IoT, make sure to create the following file types:
+
+| File type | Description |
+|||
+| **.crt – certificate container file** | A `.pem` or `.der` file, with a different extension for support in Windows Explorer.|
+| **.key – Private key file** | A key file is in the same format as a `.pem` file, with a different extension for support in Windows Explorer.|
+| **.pem – certificate container file (optional)** | Optional. A text file with a Base64-encoding of the certificate text, and a plain-text header and footer to mark the beginning and end of the certificate. |
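If you need to convert between these formats, OpenSSL is one common option. For example (a sketch with hypothetical file names, not a required step):

```bash
# Convert a Base64 (PEM) certificate to a binary DER file, then inspect its contents
openssl x509 -in certificate.pem -outform der -out certificate.der
openssl x509 -in certificate.pem -noout -text
```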
+
+## CRT file requirements
+
+Make sure that your certificates include the following CRT parameter details:
+
+| Field | Requirement |
+|||
+| **Signature Algorithm** | SHA256RSA |
+| **Signature Hash Algorithm** | SHA256 |
+| **Valid from** | A valid past date |
+| **Valid To** | A valid future date |
+| **Public Key** | RSA 2048 bits (Minimum) or 4096 bits |
+| **CRL Distribution Point** | URL to a CRL server. If your organization doesn't [validate certificates against a CRL server](../ot-deploy/create-ssl-certificates.md#verify-crl-server-access), remove this line from the certificate. |
+| **Subject CN (Common Name)** | Domain name of the appliance, such as *sensor.contoso.com*, or *.contoso.com* |
+| **Subject (C)ountry** | Certificate country code, such as `US` |
+| **Subject (OU) Org Unit** | The organization's unit name, such as *Contoso Labs* |
+| **Subject (O)rganization** | The organization's name, such as *Contoso Inc.* |
+
+> [!IMPORTANT]
+> While certificates with other parameters might work, they aren't supported by Defender for IoT. Additionally, wildcard SSL certificates, which are public key certificates that can be used on multiple subdomains such as *.contoso.com*, are insecure and aren't supported.
+> Each appliance must use a unique CN.
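As an illustration only, the following OpenSSL sketch generates a 4096-bit private key and a certificate signing request (CSR) with subject fields matching the table above; the file names and subject values are examples, and your certificate authority's process may differ:

```bash
# Generate a new RSA key and a CSR for the appliance's unique CN
openssl req -new -newkey rsa:4096 -nodes \
  -keyout sensor.key -out sensor.csr \
  -subj "/C=US/O=Contoso Inc./OU=Contoso Labs/CN=sensor.contoso.com"
```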
+
+## Key file requirements
+
+Make sure that your certificate key files use either RSA 2048 bits or 4096 bits. Using a key length of 4096 bits slows down the SSL handshake at the start of each connection, and increases the CPU usage during handshakes.
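To check the length of an existing key, you can inspect it with OpenSSL (a sketch, assuming an RSA key file named *sensor.key*):

```bash
# The first line of the output includes the key size, for example "(2048 bit)"
openssl rsa -in sensor.key -noout -text | head -n 1
```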
+
+## Next steps
+
+> [!div class="step-by-step"]
+> [« Plan your OT monitoring system](plan-corporate-monitoring.md)
defender-for-iot Plan Corporate Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/best-practices/plan-corporate-monitoring.md
+
+ Title: Plan your OT monitoring system with Defender for IoT
+description: Learn how to plan your overall OT network monitoring structure and requirements.
+ Last updated : 02/16/2023++
+# Plan your OT monitoring system with Defender for IoT
+
+This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT.
+
+Use the content below to learn how to plan your overall OT monitoring with Microsoft Defender for IoT, including the sites you're going to monitor, your user groups and types, and more.
++
+## Prerequisites
+
+Before you start planning your OT monitoring deployment, make sure that you have an Azure subscription and an OT plan onboarded to Defender for IoT. For more information, see [Add an OT plan to your Azure subscription](../getting-started.md).
+
+This step is performed by your architecture teams.
+
+## Plan OT sites and zones
+
+When working with OT networks, we recommend that you list all of the locations where your organization has resources connected to a network, and then segment those locations out into *sites* and *zones*.
+
+Each physical location can have its own site, which is further segmented into zones. You'll associate each OT network sensor with a specific site and zone, so that each sensor covers only a specific area of your network.
+
+Using sites and zones supports the principles of [Zero Trust](/security/zero-trust/), and provides extra monitoring and reporting granularity.
+
+For example, if your growing company has factories and offices in Paris, Lagos, Dubai, and Tianjin, you might segment your network as follows:
+
+|Site |Zones |
+|||
+|**Paris office** | - Ground floor (Guests) <br>- Floor 1 (Sales) <br>- Floor 2 (Executive) |
+|**Lagos office** | - Ground floor (Offices) <br>- Floors 1-2 (Factory) |
+|**Dubai office** | - Ground floor (Convention center) <br>- Floor 1 (Sales)<br>- Floor 2 (Offices) |
+|**Tianjin office** | - Ground floor (Offices) <br>- Floors 1-2 (Factory) |
+
+If you don't plan any detailed sites and zones, Defender for IoT still uses a default site and zone to assign to all OT sensors.
+
+For more information, see [Zero Trust and your OT networks](../concept-zero-trust.md).
+
+### Separating zones for recurring IP ranges
+
+Each zone can support multiple sensors, and if you're deploying Defender for IoT at scale, each sensor might detect different aspects of the same device. Defender for IoT automatically consolidates devices that are detected in the same zone, with the same logical combination of device characteristics, such as the same IP and MAC address.
+
+If you're working with multiple networks and have unique devices with similar characteristics, such as recurring IP address ranges, assign each sensor to a separate zone so that Defender for IoT knows to differentiate between the devices and identifies each device uniquely.
+
+For example, your network might look like the following image, with six network segments logically allocated across two Defender for IoT sites and zones. Note that this image shows two network segments with the same IP addresses from different production lines.
++
+In this case, we recommend separating **Site 2** into two separate zones, so that devices in the segments with recurring IP addresses aren't consolidated incorrectly, and are identified as separate and unique devices in the device inventory.
+
+For example:
++
+## Plan your users
+
+Understand who in your organization will be using Defender for IoT, and what their use cases are. While your security operations center (SOC) and IT personnel will be the most common users, you may have others in your organization who will need read-access to resources in Azure or on local resources.
+
+- **In Azure**, user assignments are based on their Azure Active Directory and RBAC roles. If you're segmenting your network into multiple sites, decide which permissions you'll want to apply per site.
+
+- **OT network sensors** support both local users and Active Directory synchronizations. If you'll be using Active Directory, make sure that you have the access details for the Active Directory server.
+
+For more information, see:
+
+- [Microsoft Defender for IoT user management](../manage-users-overview.md)
+- [Azure user roles and permissions for Defender for IoT](../roles-azure.md)
+- [On-premises users and roles for OT monitoring with Defender for IoT](../roles-on-premises.md)
+
+## Plan OT sensor and management connections
+
+For cloud-connected sensors, determine how you'll be connecting each OT sensor to Defender for IoT in the Azure cloud, such as what sort of proxy you might need. For more information, see [Methods for connecting sensors to Azure](../architecture-connections.md).
+
+If you're working in an air-gapped or hybrid environment and will have multiple, locally-managed OT network sensors, you may want to plan to deploy an on-premises management console to configure your settings and view data from a central location. For more information, see the [Air-gapped OT sensor management deployment path](../ot-deploy/air-gapped-deploy.md).
+
+## Plan on-premises SSL/TLS certifications
+
+We recommend using a [CA-signed SSL/TLS certificate](../ot-deploy/create-ssl-certificates.md) with your production system to ensure your appliances' ongoing security.
+
+Plan which certificates and which certificate authority (CA) you'll use for each OT sensor, what tools you'll use to generate the certificates, and which attributes you'll include in each certificate.
+
+For more information, see [SSL/TLS certificate requirements for on-premises resources](certificate-requirements.md).
+
+## Next steps
+
+> [!div class="step-by-step"]
+> [Plan and prepare for deploying a Defender for IoT site »](plan-prepare-deploy.md)
defender-for-iot Plan Prepare Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/best-practices/plan-prepare-deploy.md
+
+ Title: Prepare an OT site deployment - Microsoft Defender for IoT
+description: Learn how to prepare for an OT site deployment, including understanding how many OT sensors you'll need, where they should be placed, and how they'll be managed.
+ Last updated : 02/16/2023++
+# Prepare an OT site deployment
+
+This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT.
++
+To fully monitor your network, you'll need visibility on all of the endpoint devices in your network. Microsoft Defender for IoT mirrors the traffic that moves through your network devices to Defender for IoT network sensors. [OT network sensors](../architecture.md) then analyze your traffic data, trigger alerts, generate recommendations, and send data to Defender for IoT in Azure.
+
+This article helps you plan where to place OT sensors in your network so that the traffic you want to monitor is mirrored as required, and how to prepare your site for sensor deployment.
+
+## Prerequisites
+
+Before planning OT monitoring for a specific site, make sure you've [planned your overall OT monitoring system](plan-corporate-monitoring.md).
+
+This step is performed by your architecture teams.
+
+### Learn about Defender for IoT's monitoring architecture
+
+Use the following articles to understand more about the components and architecture in your network and Defender for IoT system:
+
+- [Microsoft Defender for IoT components](../architecture.md)
+- [Understanding your OT network architecture](understand-network-architecture.md)
+
+## Create a network diagram
+
+Each organization's network will have its own complexity. Create a network map diagram that thoroughly lists all the devices in your network so that you can identify the traffic you want to monitor.
+
+While creating your network diagram, use the following questions to identify and make notes about the different elements in your network and how they communicate.
+
+### General questions
+
+- What are your overall monitoring goals?
+
+- Do you have any redundant networks, and are there areas of your network map that don't need monitoring and you can disregard?
+
+- Where are your network's security and operational risks?
+
+### Network questions
+
+- Which protocols are active on monitored networks?
+
+- Are VLANs configured in the network design?
+
+- Is there any routing in the monitored networks?
+
+- Is there any serial communication in the network?
+
+- Where are firewalls installed in the networks you want to monitor?
+
+- Is there traffic between an industrial control (ICS) network and an enterprise, business network? If so, is this traffic monitored?
+
+- What's the physical distance between your switches and the enterprise firewall?
+
+- Is OT system maintenance done with fixed or transient devices?
+
+### Switch questions
+
+- If a switch is otherwise unmanaged, can you monitor the traffic from a higher-level switch? For example, if your OT architecture uses a [ring topology](sample-connectivity-models.md#sample-ring-topology), only one switch in the ring needs monitoring.
+
+- Can unmanaged switches be replaced with managed switches, or is the use of network TAPs an option?
+
+- Can you monitor the switch's VLAN, or is the VLAN visible in another switch that you can monitor?
+
+- If you connect a network sensor to the switch, will it mirror the communication between the HMI and PLCs?
+
+- If you want to connect a network sensor to the switch, is there physical rack space available in the switch's cabinet?
+
+- What's the cost/benefit of monitoring each switch?
+
+## Identify the devices and subnets you want to monitor
+
+The traffic you want to monitor and mirror to Defender for IoT network sensors is the traffic that's most [interesting](understand-network-architecture.md#identifying-interesting-traffic-points) to you from a security or operational perspective.
+
+Review your OT network diagram together with your site engineers to define where you'll find the most relevant traffic for monitoring. We recommend that you meet with both network and operational teams to clarify expectations.
+
+Together with your team, create a table of devices you want to monitor with the following details:
+
+|Specification |Description |
+|||
+| **Vendor** | The device's manufacturing vendor |
+|**Device name** | A meaningful name for ongoing use and reference |
+|**Type** | The device type, such as: *Switch*, *Router*, *Firewall*, *Access Point*, and so on |
+|**Network layer** |The devices you'll want to monitor are either L2 or L3 devices:<br> - *L2 devices* are devices within the IP segment<br>- *L3 devices* are devices outside of the IP segment<br> <br>Devices that support both layers can be considered as L3 devices. |
+|**Crossing VLANs** | The IDs of any VLANs that cross the device. For example, verify these VLAN IDs by checking the spanning tree operation mode on each VLAN to see if they cross an associated port. |
+|**Gateway for** | The VLANs for which the device acts as a default gateway. |
+| **Network details** | The device's IP address, subnet, D-GW, and DNS host |
+| **Protocols** | Protocols used on the device. Compare your protocols against Defender for IoT's [list of protocols](../concept-supported-protocols.md) supported out-of-the-box. |
+| **Supported traffic mirroring** | Define what sort of traffic mirroring is supported by each device, such as SPAN, RSPAN, ERSPAN, or TAP. <br><br>Use this information to [choose traffic mirroring methods for your OT sensors](traffic-mirroring-methods.md). |
+| **Managed by partner services?** | Describe if a partner service, such as Siemens, Rockwell, or Emerson, manages the device. If relevant, describe the management policy.|
+| **Serial connections** | If the device communicates via a serial connection, specify the serial communication protocol. |
+
+### Plan a multi-sensor deployment
+
+If you're planning on deploying multiple network sensors, also consider the following recommendations when deciding where to place your sensors:
+
+- **Physically connected switches**: For switches that are physically connected by Ethernet cable, make sure to plan at least one sensor for every 80 meters of distance between switches.
+
+- **Multiple networks without physical connectivity**: If you have multiple networks without any physical connectivity between them, plan for at least one sensor for each individual network
+
+- **Switches with RSPAN support**: If you have switches that can use [RSPAN traffic mirroring](../traffic-mirroring/configure-mirror-rspan.md), plan at least one sensor for every eight switches, with a local SPAN port. Plan to place the sensor close enough to the switches so that you can connect them by cable.
+
+### Create a list of subnets
+
+Create an aggregated list of subnets that you want to monitor, based on the list of devices you want to monitor across your entire network.
+
+After deploying your sensors, you'll use this list to verify that the listed subnets are detected automatically, and manually update the list as needed.
+
+## List your planned OT sensors
+
+After you understand the traffic you want to mirror to Defender for IoT, create a full list of all the OT sensors you'll be onboarding.
+
+For each sensor, list:
+
+- Whether the sensor will be a [cloud-connected or locally-managed sensor](../architecture.md#cloud-connected-vs-local-ot-sensors)
+
+- For cloud-connected sensors, the [cloud connection method](../architecture-connections.md) you'll be using.
+
+- Whether you'll be using physical or virtual appliances for your sensors, considering the bandwidth that you'll need for quality of service (QoS). For more information, see [Which appliances do I need?](../ot-appliance-sizing.md)
+
+- The [site and zone](plan-corporate-monitoring.md#plan-ot-sites-and-zones) you'll be assigning to each sensor.
+
+ Data ingested from sensors in the same site or zone can be viewed together, segmented out from other data in your system. If there's sensor data that you want to view grouped together in the same site or zone, make sure to assign sensor sites and zones accordingly.
+
+- The [traffic mirroring method](traffic-mirroring-methods.md) you'll use for each sensor
+
+As your network expands in time, you can onboard more sensors, or modify your existing sensor definitions.
+
+> [!IMPORTANT]
+> We recommend checking the characteristics of the devices you expect each sensor to detect, such as IP and MAC addresses. Devices that are detected in the same zone with the same logical set of device characteristics are automatically consolidated and are identified as the same device.
+>
+> For example, if you're working with multiple networks and recurring IP addresses, make sure that you plan each sensor with a different zone so that devices are identified correctly as separate and unique devices.
+>
+> For more information, see [Separating zones for recurring IP ranges](plan-corporate-monitoring.md#separating-zones-for-recurring-ip-ranges).
+>
+
+## Prepare on-premises appliances
+
+- **If you're using virtual appliances**, ensure that you have the relevant resources configured. For more information, see [OT monitoring with virtual appliances](../ot-virtual-appliances.md).
+
+- **If you're using physical appliances**, ensure that you have the required hardware. You can buy [pre-configured appliances](../ot-pre-configured-appliances.md), or plan to [install software](../ot-deploy/install-software-ot-sensor.md) on your own appliances.
+
+ To buy pre-configured appliances:
+
+ 1. Go to Defender for IoT in the Azure portal.
+ 1. Select **Getting started** > **Sensor** > **Buy preconfigured appliance** > **Contact**.
+
+ The link opens an email to [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsoft.com&subject=Information%20about%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances&body=Dear%20Arrow%20Representative,%0D%0DOur%20organization%20is%20interested%20in%20receiving%20quotes%20for%20Microsoft%20Defender%20for%20IoT%20appliances%20as%20well%20as%20fulfillment%20options.%0D%0DThe%20purpose%20of%20this%20communication%20is%20to%20inform%20you%20of%20a%20project%20which%20involves%20[NUMBER]%20sites%20and%20[NUMBER]%20sensors%20for%20[ORGANIZATION%20NAME].%20Having%20reviewed%20potential%20appliances%20suitable%20for%20our%20project,%20we%20would%20like%20to%20obtain%20more%20information%20about:%20___________%0D%0D%0DI%20would%20appreciate%20being%20contacted%20by%20an%20Arrow%20representative%20to%20receive%20a%20quote%20for%20the%20items%20mentioned%20above.%0DI%20understand%20the%20quote%20and%20appliance%20delivery%20shall%20be%20in%20accordance%20with%20the%20relevant%20Arrow%20terms%20and%20conditions%20for%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances.%0D%0D%0DBest%20Regards,%0D%0D%0D%0D%0D%0D//////////////////////////////%20%0D/////////%20Replace%20[NUMBER]%20with%20appropriate%20values%20related%20to%20your%20request.%0D/////////%20Replace%20[ORGANIZATION%20NAME]%20with%20the%20name%20of%20the%20organization%20you%20represent.%0D//////////////////////////////%0D%0D)with a template request for Defender for IoT appliances.
+
+For more information, see [Which appliances do I need?](../ot-appliance-sizing.md)
+
+### Prepare ancillary hardware
+
+If you're using physical appliances, make sure that you have the following extra hardware available for each physical appliance:
+
+- A monitor and keyboard
+- Rack space
+- AC power
+- A LAN cable to connect the appliance's management port to the network switch
+- LAN cables for connecting mirror (SPAN) ports and network terminal access points (TAPs) to your appliance
+
+### Prepare appliance network details
+
+When you have your appliances ready, make a list of the following details for each appliance:
+
+- IP address
+- Subnet
+- Default gateway
+- Host name
+- DNS server (optional), with the DNS server IP address and host name
+
+## Prepare a deployment workstation
+
+Prepare a workstation from where you can run Defender for IoT deployment activities. The workstation can be a Windows or Mac machine, with the following requirements:
+
+- Terminal software, such as PuTTY
+
+- A supported browser for connecting to sensor consoles and the Azure portal. For more information, see [recommended browsers for the Azure portal](../../../azure-portal/azure-portal-supported-browsers-devices.md#recommended-browsers).
+
+- Required firewall rules configured, with access open for required interfaces. For more information, see [Networking requirements](../networking-requirements.md).
+
+## Prepare CA-signed certificates
+
+We recommend using CA-signed certificates in production deployments.
+
+Make sure that you understand the [SSL/TLS certificate requirements for on-premises resources](certificate-requirements.md). If you want to deploy a CA-signed certificate during initial deployment, make sure to have the certificate prepared.
+
+If you decide to deploy with the built-in, self-signed certificate, we recommend that you deploy a CA-signed certificate in production environments later on.
+
+For more information, see:
+
+- [Create SSL/TLS certificates for OT appliances](../ot-deploy/create-ssl-certificates.md)
+- [Manage SSL/TLS certificates](../how-to-manage-individual-sensors.md#manage-ssltls-certificates)
+
+## Next steps
+
+> [!div class="step-by-step"]
+> [« Plan your OT monitoring system](plan-corporate-monitoring.md)
+
+> [!div class="step-by-step"]
+> [Onboard OT sensors to Defender for IoT »](../onboard-sensors.md)
defender-for-iot Sample Connectivity Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/best-practices/sample-connectivity-models.md
Title: Sample OT network connectivity models - Microsoft Defender for IoT description: This article describes sample connectivity methods for Microsoft Defender for IoT OT sensor connections. Last updated 11/08/2022-+ # Sample OT network connectivity models
defender-for-iot Traffic Mirroring Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/best-practices/traffic-mirroring-methods.md
Title: Traffic mirroring methods - Microsoft Defender for IoT
+ Title: Choose a traffic mirroring method - Microsoft Defender for IoT
description: This article describes traffic mirroring methods for OT monitoring with Microsoft Defender for IoT. Last updated 09/20/2022-+
-# Traffic mirroring methods for OT monitoring
+# Choose a traffic mirroring method for OT sensors
-This article introduces the supported traffic mirroring methods for OT monitoring with Microsoft Defender for IoT.
+This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT, and describes the supported traffic mirroring methods for OT monitoring with Microsoft Defender for IoT.
++
+The decision as to which traffic mirroring method to use depends on your network configuration and the needs of your organization.
To ensure that Defender for IoT only analyzes the traffic that you want to monitor, we recommend that you configure traffic mirroring on a switch or a terminal access point (TAP) that includes only industrial ICS and SCADA traffic.
To ensure that Defender for IoT only analyzes the traffic that you want to monit
> SPAN and RSPAN are Cisco terminology. Other brands of switches have similar functionality but might use different terminology. >
-## Supported mirroring methods
+## Mirroring port scope recommendations
-The decision as to which traffic mirroring method to use depends on your network configuration and the needs of your organization.
+We recommend configuring your traffic mirroring from all of your switch's ports, even if no data is connected to them. If you don't, rogue devices can later be connected to an unmonitored port, and those devices won't be detected by the Defender for IoT network sensors.
+
+For OT networks that use broadcast or multicast messaging, configure traffic mirroring only for RX (*Receive*) transmissions. Otherwise, multicast messages are repeated for each relevant active port, and you use more bandwidth unnecessarily.
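
Whichever mirroring method you choose, it can help to confirm that mirrored traffic is actually reaching the monitoring interface. For example, a quick check from a Linux host attached to the mirror or TAP output; the interface name is a placeholder:

```bash
# Confirm that mirrored OT traffic arrives on the monitoring interface.
# Replace eth1 with the interface connected to the SPAN/TAP output.
sudo tcpdump -i eth1 -nn -c 20
```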
+
+## Compare supported traffic mirroring methods
Defender for IoT supports the following methods:
-|Method |Description |
-|||
-|[A switch SPAN port](../traffic-mirroring/configure-mirror-span.md) | Mirrors local traffic from interfaces on the switch to a different interface on the same switch |
-|[Remote SPAN (RSPAN) port](../traffic-mirroring/configure-mirror-rspan.md) | Mirrors traffic from multiple, distributed source ports into a dedicated remote VLAN |
-|[An encapsulated remote switched port analyzer (ERSPAN)](../traffic-mirroring/configure-mirror-erspan.md) | Mirrors input interfaces to your OT sensor's monitoring interface |
-|[Active or passive aggregation (TAP)](../traffic-mirroring/configure-mirror-tap.md) | Installs an active / passive aggregation TAP inline to your network cable, which duplicates traffic to the OT network sensor. Best method for forensic monitoring. |
-|[An ESXi vSwitch](../traffic-mirroring/configure-mirror-esxi.md) | Mirrors traffic using *Promiscuous mode* on an ESXi vSwitch. |
-|[A Hyper-V vSwitch](../traffic-mirroring/configure-mirror-hyper-v.md) | Mirrors traffic using *Promiscuous mode* on a Hyper-V vSwitch. |
+|Method |Description | More information |
+||||
+|**A switch SPAN port** | Mirrors local traffic from interfaces on the switch to a different interface on the same switch | [Configure mirroring with a switch SPAN port](../traffic-mirroring/configure-mirror-span.md) |
+|**Remote SPAN (RSPAN) port** | Mirrors traffic from multiple, distributed source ports into a dedicated remote VLAN | [Remote SPAN (RSPAN) ports](#remote-span-rspan-ports) <br><br>[Configure traffic mirroring with a Remote SPAN (RSPAN) port](../traffic-mirroring/configure-mirror-rspan.md) |
+|**Active or passive aggregation (TAP)** | Installs an active / passive aggregation TAP inline to your network cable, which duplicates traffic to the OT network sensor. Best method for forensic monitoring. | [Active or passive aggregation (TAP)](#active-or-passive-aggregation-tap) |
+|**An encapsulated remote switched port analyzer (ERSPAN)** | Mirrors input interfaces to your OT sensor's monitoring interface | [ERSPAN ports](#erspan-ports) <br><br> [Configure traffic mirroring with an encapsulated remote switched port analyzer (ERSPAN)](../traffic-mirroring/configure-mirror-erspan.md). |
+|**An ESXi vSwitch** | Mirrors traffic using *Promiscuous mode* on an ESXi vSwitch. | [Traffic mirroring with virtual switches](#traffic-mirroring-with-virtual-switches) <br><br>[Configure traffic mirroring with an ESXi vSwitch](../traffic-mirroring/configure-mirror-esxi.md). |
+|**A Hyper-V vSwitch** | Mirrors traffic using *Promiscuous mode* on a Hyper-V vSwitch. | [Traffic mirroring with virtual switches](#traffic-mirroring-with-virtual-switches) <br><br>[Configure traffic mirroring with a Hyper-V vSwitch](../traffic-mirroring/configure-mirror-hyper-v.md) |
-## Mirroring port scope recommendations
+## Remote SPAN (RSPAN) ports
-We recommend configuring your traffic mirroring from all of your switch's ports, even if no data is connected to them. If you don't, rogue devices can later be connected to an unmonitored port, and those devices won't be detected by the Defender for IoT network sensors.
+Configure a remote SPAN (RSPAN) session on your switch to mirror traffic from multiple, distributed source ports into a dedicated remote VLAN.
-For OT networks that use broadcast or multicast messaging, configure traffic mirroring only for RX (*Recieve*) transmissions. Multicast messages will be repeated for any relevant active ports, and you'll be using more bandwidth unnecessarily.
+Data in the VLAN is then delivered through trunked ports, across multiple switches to a specified switch that contains the physical destination port. Connect the destination port to your OT network sensor to monitor traffic with Defender for IoT.
-## Next steps
+The following diagram shows an example of a remote VLAN architecture:
++
+For more information, see [Configure traffic mirroring with a Remote SPAN (RSPAN) port](../traffic-mirroring/configure-mirror-rspan.md).
+
+## Active or passive aggregation (TAP)
+
+When using active or passive aggregation to mirror traffic, an active or passive aggregation terminal access point (TAP) is installed inline to the network cable. The TAP duplicates both *Receive* and *Transmit* traffic to the OT network sensor so that you can monitor the traffic with Defender for IoT.
+
+A TAP is a hardware device that allows network traffic to flow back and forth between ports without interruption. The TAP creates an exact copy of both sides of the traffic flow, continuously, without compromising network integrity.
+
+For example:
++
+Some TAPs aggregate both *Receive* and *Transmit*, depending on the switch configuration. If your switch doesn't support aggregation, each TAP uses two ports on your OT network sensor to monitor both *Receive* and *Transmit* traffic.
+
+### Advantages of mirroring traffic with a TAP
+
+We especially recommend TAPs when mirroring traffic for forensic purposes. Advantages of mirroring traffic with TAPs include:
+
+- TAPs are hardware-based and can't be compromised
+
+- TAPs pass all traffic, even damaged messages that are often dropped by the switches
+
+- TAPs aren't processor-sensitive, which means that packet timing is exact. In contrast, switches handle mirroring functionality as a low-priority task, which can affect the timing of the mirrored packets.
+
+You can also use a TAP aggregator to monitor your traffic ports. However, TAP aggregators aren't processor-based, and aren't as intrinsically secure as hardware TAPs. TAP aggregators may not reflect exact packet timing.
+
+### Common TAP models
+
+The following TAP models have been tested for compatibility with Defender for IoT. Other vendors and models might also be compatible.
+
+- **Garland P1GCCAS**
+
+ When using a Garland TAP, make sure to set up your network to support aggregation. For more information, see the **Tap Aggregation** diagram under the **Network Diagrams** tab in the [Garland installation guide](https://www.garlandtechnology.com/products/aggregator-tap-copper).
+
+- **IXIA TPA2-CU3**
+
+ When using an Ixia TAP, make sure **Aggregation mode** is active. For more information, see the [Ixia install guide](https://support.ixiacom.com/sites/default/files/resources/install-guide/c_taps_zd-copper_qig_0303.pdf).
+
+- **US Robotics USR 4503**
+
+ When using a US Robotics TAP, make sure to toggle the aggregation mode on by setting the selectable switch to **AGG**. For more information, see the [US Robotics installation guide](https://www.usr.com/files/9814/7819/2756/4503-ig.pdf).
+
+## ERSPAN ports
+
+Use an encapsulated remote switched port analyzer (ERSPAN) to mirror input interfaces over an IP network to your OT sensor's monitoring interface, when securing remote networks with Defender for IoT.
+
+The sensor's monitoring interface is a promiscuous interface and doesn't have a specifically allocated IP address. When ERSPAN support is configured, traffic payloads that are ERSPAN encapsulated with GRE tunnel encapsulation will be analyzed by the sensor.
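
As a hedged sanity check, you can confirm from a Linux host on the same monitoring path that GRE-encapsulated packets are arriving (GRE is IP protocol 47) before expecting the sensor to analyze them; the interface name is a placeholder:

```bash
# Check for GRE-encapsulated (ERSPAN) packets on the monitoring interface.
sudo tcpdump -i eth1 -nn -c 10 'ip proto 47'
```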
+
+Use ERSPAN encapsulation when there's a need to extend monitored traffic across Layer 3 domains. ERSPAN is a Cisco proprietary feature and is available only on specific routers and switches. For more information, see the [Cisco documentation](https://learningnetwork.cisco.com/s/article/span-rspan-erspan).
+
+> [!NOTE]
+> This article provides high-level guidance for configuring traffic mirroring with ERSPAN. Specific implementation details will vary depending on your equipment vendor.
+>
+
+### ERSPAN architecture
+
+ERSPAN sessions include a source session and a destination session configured on different switches. Between the source and destination switches, traffic is encapsulated in GRE, and can be routed over layer 3 networks.
+
+For example:
++
+ERSPAN transports mirrored traffic over an IP network using the following process:
+
+1. A source router encapsulates the traffic and sends the packet over the network.
+1. At the destination router, the packet is de-capsulated and sent to the destination interface.
+
+ERSPAN source options include elements such as:
+
+- Ethernet ports and port channels
+- VLANs; all supported interfaces in the VLAN are ERSPAN sources
+- Fabric port channels
+- Satellite ports and host interface port channels
+
+For more information, see [Configure traffic mirroring with an encapsulated remote switched port analyzer (ERSPAN)](../traffic-mirroring/configure-mirror-erspan.md).
+
+## Traffic mirroring with virtual switches
+
+While a virtual switch doesn't have mirroring capabilities, you can use *Promiscuous mode* in a virtual switch environment as a workaround for configuring a monitoring port, similar to a [SPAN port](../traffic-mirroring/configure-mirror-span.md). A SPAN port on your switch mirrors local traffic from interfaces on the switch to a different interface on the same switch.
+
+Connect the destination switch to your OT network sensor to monitor traffic with Defender for IoT.
+
+*Promiscuous mode* is a mode of operation and a security, monitoring, and administration technique that is defined at the virtual switch or portgroup level. When promiscuous mode is used, any of the virtual machine's network interfaces in the same portgroup can view all network traffic that goes through that virtual switch. By default, promiscuous mode is turned off.
For more information, see: -- [Prepare your OT network for Microsoft Defender for IoT](../how-to-set-up-your-network.md)-- [Sample OT network connectivity models](sample-connectivity-models.md)
+- [Configure traffic mirroring with an ESXi vSwitch](../traffic-mirroring/configure-mirror-esxi.md)
+- [Configure traffic mirroring with a Hyper-V vSwitch](../traffic-mirroring/configure-mirror-hyper-v.md)
+
+## Next steps
+
+> [!div class="step-by-step"]
+> [« Prepare an OT site deployment](plan-prepare-deploy.md)
defender-for-iot Understand Network Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/best-practices/understand-network-architecture.md
Title: Understand your OT network architecture - Microsoft Defender for IoT
+ Title: Microsoft Defender for IoT and your network architecture - Microsoft Defender for IoT
description: Describes the Purdue reference module in relation to Microsoft Defender for IoT to help you understand more about your own OT network architecture. Last updated 06/02/2022
-# Understand your OT network architecture
+# Defender for IoT and your network architecture
-When planning your network monitoring, you must understand your system network architecture and how it will need to connect to Defender for IoT. Also, understand where each of your system elements falls in the Purdue Reference model for Industrial Control System (ICS) OT network segmentation.
+This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT.
-Defender for IoT network sensors receive traffic from two main sources, either by switch mirror ports (SPAN ports) or network TAPs. The network sensor's management port connects to the business, corporate, or sensor management network for network management from the Azure portal or an on-premises management system.
+Use the content below to learn about your own operational technology (OT)/IoT network architecture, and where each of your system elements falls into layers of OT network segmentation.
-For example:
+## OT/IoT networking layers
+
+Your organization's network consists of many device types, which can be divided into the following main groups:
+
+- **Endpoint devices**. Can include multiple sub-groups, such as servers, computers, IoT (internet of things) devices, and so on.
+- **Network devices**. Serve the infrastructure with networking services, and can include network switches, firewalls, routers, and access points.
-## Purdue reference model and Defender for IoT
-Most network environments are designed with a hierarchical model of three layers. For example:
-The Purdue Reference Model is a model for Industrial Control System (ICS)/OT network segmentation that defines six layers, components and relevant security controls for those networks.
+|Layer |Description |
+|||
+|**Access** | The access layer is where most endpoints will be located. These endpoints are typically served by a default gateway and routing from the upper layers, and often, routing from the distribution layer. <br><br>* A *default gateway* is a network service or entity within the local subnet that is responsible for routing traffic outside of the LAN, or IP *segment*. |
+|**Distribution** | The distribution layer is responsible for aggregating multiple access layers and delivering communication to the core layer with services like VLAN routing, quality of service, network policies, and so on. |
+|**Core** | The core layer contains the organization's main server farm, and provides high-speed, low-latency services via the distribution layer. |
+
+## The Purdue model of networking architecture
-Each device type in your OT network falls in a specific level of the Purdue model. The following image shows how devices in your network spread across the Purdue model and connect to Defender for IoT services.
+The Purdue Reference Model for Industrial Control System (ICS)/OT network segmentation defines a further six layers, with specific components and relevant security controls for each layer.
+Each device type in your OT network falls in a specific level of the Purdue model, for example, as shown in the following image:
-The following table describes each level of the Purdue model when applied to Defender for IoT devices:
+
+The following table describes each level of the Purdue model when applied to devices you may have in your network:
|Name |Description | ||| |**Level 0**: Cell and area | Level 0 consists of a wide variety of sensors, actuators, and devices involved in the basic manufacturing process. These devices perform the basic functions of the industrial automation and control system, such as: <br><br>- Driving a motor<br>- Measuring variables<br>- Setting an output<br>- Performing key functions, such as painting, welding, and bending | | **Level 1**: Process control | Level 1 consists of embedded controllers that control and manipulate the manufacturing process whose key function is to communicate with the Level 0 devices. In discrete manufacturing, those devices are programmable logic controllers (PLCs) or remote telemetry units (RTUs). In process manufacturing, the basic controller is called a distributed control system (DCS). | |**Level 2**: Supervisory | Level 2 represents the systems and functions associated with the runtime supervision and operation of an area of a production facility. These usually include the following: <br><br>- Operator interfaces or human-machine interfaces (HMIs) <br>- Alarms or alerting systems <br> - Process historian and batch management systems <br>- Control room workstations <br><br>These systems communicate with the PLCs and RTUs in Level 1. In some cases, they communicate or share data with the site or enterprise (Level 4 and Level 5) systems and applications. These systems are primarily based on standard computing equipment and operating systems (Unix or Microsoft Windows). |
-|**Levels 3 and 3.5**: Site-level and industrial perimeter network | The site level represents the highest level of industrial automation and control systems. The systems and applications that exist at this level manage site-wide industrial automation and control functions. Levels 0 through 3 are considered critical to site operations. The systems and functions that exist at this level might include the following: <br><br>- Production reporting (for example, cycle times, quality index, predictive maintenance) <br>- Plant historian <br>- Detailed production scheduling<br>- Site-level operations management <br>-0 Device and material management <br>- Patch launch server <br>- File server <br>- Industrial domain, Active Directory, terminal server <br><br>These systems communicate with the production zone and share data with the enterprise (Level 4 and Level 5) systems and applications. |
+|**Levels 3 and 3.5**: Site-level and industrial perimeter network | The site level represents the highest level of industrial automation and control systems. The systems and applications that exist at this level manage site-wide industrial automation and control functions. Levels 0 through 3 are considered critical to site operations. The systems and functions that exist at this level might include the following: <br><br>- Production reporting (for example, cycle times, quality index, predictive maintenance) <br>- Plant historian <br>- Detailed production scheduling<br>- Site-level operations management <br>- Device and material management <br>- Patch launch server <br>- File server <br>- Industrial domain, Active Directory, terminal server <br><br>These systems communicate with the production zone and share data with the enterprise (Level 4 and Level 5) systems and applications. |
|**Levels 4 and 5**: Business and enterprise networks | Level 4 and Level 5 represent the site or enterprise network where the centralized IT systems and functions exist. The IT organization directly manages the services, systems, and applications at these levels. |
-## Next steps
+## Placing OT sensors in your network
+
+When Defender for IoT network sensors are connected to your network infrastructure, they receive mirrored traffic, such as from switch mirror (SPAN) ports or network TAPs. The sensor's management port connects to the business, corporate, or sensor management network, such as for network management from the Azure portal.
+
+For example:
++
+The following image adds Defender for IoT resources to the same network as described [earlier](#the-purdue-model-of-networking-architecture), including a SPAN port, network sensor, and Defender for IoT in the Azure portal.
++
+For more information, see [Sample OT network connectivity models](sample-connectivity-models.md).
-After you've understood your own OT network architecture, learn more about how to plan your Defender for IoT deployment in your network. Continue with [Plan your sensor connections](plan-network-monitoring.md).
+## Identifying interesting traffic points
-For more information, see:
+Typically, interesting points from a security perspective are the interfaces that connect the default gateway entity to the core or distribution switch.
+
+Identifying these interfaces as interesting points ensures that traffic traveling from inside the IP segment to outside the IP segment is monitored. Make sure to also consider *missing* traffic, which is traffic that was originally destined to leave the segment, but ends up remaining inside the segment. For more information, see [Traffic flows in your network](#traffic-flows-in-your-network).
+
+When planning a Defender for IoT deployment, we recommend considering the following elements in your network:
+
+|Consideration |Description |
+|||
+|**Unique traffic types inside a segment** | Especially consider the following types of traffic inside a network segment:<br><br> **Broadcast / Multicast traffic**: Traffic sent to any entity within the subnet. <br><br>Even when Internet Group Management Protocol (IGMP) snooping is configured within your network, there's no guarantee that multicast traffic is forwarded to any specific entity. <br><br>Broadcast and multicast traffic is typically sent to all entities in the local IP subnet, including the default gateway entity, and is therefore also covered and monitored.<br><br>**Unicast traffic**: Traffic forwarded directly to its destination, without passing through all of the subnet's endpoints or the default gateway. <br><br>Monitor unicast traffic with Defender for IoT by placing sensors directly on the access switches. |
+|**Monitor both streams of traffic** | When streaming traffic to Defender for IoT, some vendors and products allow streaming in only one direction, which can cause a gap in your data. <br><br>It's very useful to monitor both directions of traffic to get network conversation information about your subnets and better accuracy in general.|
+|**Find a subnet's default gateway** | For each interesting subnet, the interesting point will be any connection to the entity that acts as the default gateway for the network subnet. <br><br>However, in some cases, there's traffic within the subnet that isn't monitored by the regular interesting point. Monitoring this type of traffic, which isn't otherwise monitored by the typical deployment, is useful especially on sensitive subnets. |
+|**Atypical traffic** | Monitoring traffic that isn't otherwise monitored by the typical deployment may require extra streaming points and network solutions, such as RSPAN, network tappers, and so on. <br><br>For more information, see [Traffic mirroring methods for OT monitoring](traffic-mirroring-methods.md). |
+
+### Sample traffic diagram
+
+The following diagram shows a sample network in a building of three floors, where the first and second floors house both endpoints and switches, and the third floor houses endpoints and switches, as well as firewalls, core switches, a server, and routers.
+
+- **Traffic traveling outside of the IP segment** is shown by a blue dotted line.
+
+- **Interesting traffic** is marked in red, and indicates the places where we'd recommend putting network sensors to stream that interesting traffic to Defender for IoT.
++
+## Traffic flows in your network
+
+Devices that trigger network traffic match the configured subnet mask and IP address with a destination IP address to understand what the traffic's destination should be. The traffic's destination will be either the default gateway or elsewhere inside the IP segment. This matching process can also trigger an ARP process to find the MAC address for the destination IP address.
+
+Based on the results of the matching process, devices track their network traffic as either traffic *within* or *outside* of the IP segment.
+
+|Traffic |Description |Example |
+||||
+|**Traffic outside of the IP segment** | When the traffic destination isn't found within the subnet mask's range, the endpoint device sends the traffic to the specific default gateway that's responsible for routing traffic flow to other relevant segments. <br><br>Any traffic traveling outside of an IP segment flows through a default gateway to cross the network segment, as a first hop in the path to its destination. <br><br>**Note**: Placing a Defender for IoT OT network sensor at this point ensures that all traffic traveling outside of the segment is streamed to Defender for IoT, analyzed, and can be investigated. |- A PC is configured with an IP address of `10.52.2.201` and a subnet mask of `255.255.255.0`. <br><br>- The PC triggers a network flow to a web server with a destination IP address of `10.17.0.88`. <br><br>- The PC's operating system calculates the destination IP address with the range of IP addresses in the segment to determine if the traffic should be sent locally, inside the segment, or direct to the default gateway entity that can find the correct route to the destination.<br><br>- Based on the calculation's results, the operating system finds that for the IP and subnet peer (`10.52.2.201` and `255.255.255.0`), the segment range is `10.52.2.0` – `10.52.2.255`. <br><br>**The results mean** that the web server is **not** within the same IP segment as the PC, and the traffic should be sent to the default gateway. |
+|**Traffic within the IP segment** |If the device finds the destination IP address within the subnet mask range, the traffic doesn't cross the IP segment, and travels inside the segment to find the destination MAC address. <br><br>This traffic requires an ARP resolution, which triggers a broadcast packet to find the destination IP address's MAC address. | - A PC is configured with an IP address of `10.52.2.17` and a subnet mask of `255.255.255.0`. <br><br> - This PC triggers a network flow to another PC, with a destination address of `10.52.2.131`. <br><br>- The PC's operating system calculates the destination IP address with the range of IP addresses in the segment to determine if the traffic should be sent locally, inside the segment, or direct to the default gateway entity that can find the correct route to the destination. <br><br>- Based on the calculation's results, the operating system finds that for the IP and subnet peer (`10.52.2.17` and `255.255.255.0`), the segment range is `10.52.2.0 – 10.52.2.255`. <br><br>**The results** mean that the PC's destination IP address is within the same segment as the PC itself, and the traffic should be sent directly on the segment. |
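
If you want to reproduce the calculation from the first example, the `ipcalc` utility (assuming the Debian/Ubuntu package) shows the segment range for the PC's IP address and subnet mask, making it clear that `10.17.0.88` falls outside it:

```bash
# Show the network range for 10.52.2.201/24 (the PC in the first example).
ipcalc 10.52.2.201/24
```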
+
+## Next steps
-- [Traffic mirroring methods for OT monitoring](traffic-mirroring-methods.md)-- [Sample OT network connectivity models](sample-connectivity-models.md)
+> [!div class="step-by-step"]
+> [« Prepare an OT site deployment](plan-prepare-deploy.md)
defender-for-iot Cli Ot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/cli-ot-sensor.md
For more information, see [Access the CLI](../references-work-with-defender-for-
Use the following commands to verify that the Defender for IoT application on the OT sensor are working correctly, including the web console and traffic analysis processes.
-Health checks are also available from the OT sensor console. For more information, see [Troubleshoot the sensor and on-premises management console](../how-to-troubleshoot-the-sensor-and-on-premises-management-console.md).
+Health checks are also available from the OT sensor console. For more information, see [Troubleshoot the sensor](how-to-troubleshoot-sensor.md).
|User |Command |Full command syntax | ||||
defender-for-iot Concept Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-sentinel-integration.md
Microsoft Sentinel is a scalable cloud service for security information event ma
In Microsoft Sentinel, the Defender for IoT data connector and solution brings out-of-the-box security content to SOC teams, helping them to view, analyze and respond to OT security alerts, and understand the generated incidents in the broader organizational threat contents.
-Install the Defender for IoT data connector alone to stream your OT network alerts to Microsoft Sentinel. Then, also install the **Microsoft Defender for IoT** solution the extra value of IoT/OT-specific analytics rules, workbooks, and SOAR playbooks, as well as incident mappings to [MITRE ATT&CK for ICS](https://collaborate.mitre.org/attackics/index.php/Overview).
+Install the Defender for IoT data connector alone to stream your OT network alerts to Microsoft Sentinel. Then, also install the **Microsoft Defender for IoT** solution for the extra value of IoT/OT-specific analytics rules, workbooks, and SOAR playbooks, as well as incident mappings to [MITRE ATT&CK for ICS techniques](https://attack.mitre.org/techniques/ics/).
### Integrated detection and response
defender-for-iot Concept Supported Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-supported-protocols.md
This article lists the protocols that are supported by default in Microsoft Defe
## Supported protocols for OT device discovery
-Defender for IoT can detect the following protocols when identifying assets and devices in your network:
+OT network sensors can detect the following protocols when identifying assets and devices in your network:
|Brand / Vendor |Protocols | |||
Defender for IoT can detect the following protocols when identifying assets and
|**Toshiba** |Toshiba Computer Link | |**Yokogawa** | Centum ODEQ (Centum / ProSafe DCS)<br> HIS Equalize<br> FA-M3<br> Vnet/IP | + [!INCLUDE [active-monitoring-protocols](includes/active-monitoring-protocols.md)] ## Supported protocols for Enterprise IoT device discovery
defender-for-iot Configure Active Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/configure-active-monitoring.md
For example, the following image highlights in grey the extra network access you
## Next steps
-Use one of the following procedures to configure active monitoring in your OT network:
+For more information, see:
- [Configure Windows Endpoint monitoring](configure-windows-endpoint-monitoring.md) - [Configure DNS servers for reverse lookup resolution for OT monitoring](configure-reverse-dns-lookup.md)-
-For more information, see:
--- [View your device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md)-- [View your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md)
defender-for-iot Configure Reverse Dns Lookup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/configure-reverse-dns-lookup.md
Use reverse DNS lookup to resolve host names or FQDNs associated with the IP add
All CIDR formats are supported.
+## Prerequisites
+
+Before performing the procedures in this article, you must have:
+
+- An OT network sensor [installed](ot-deploy/install-software-ot-sensor.md), [activated, and configured](ot-deploy/activate-deploy-sensor.md).
+
+- Access to your OT network sensor as an **Admin** user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
+
+- Completed the prerequisites outlined in [Configure active monitoring for OT networks](configure-active-monitoring.md), and confirmed that active monitoring is right for your network.
++ ## Define DNS servers
-1. On your sensor console, select **System settings**> **Network monitoring** and under **Active Discovery**, select **Reverse DNS Lookup**.
+1. On your sensor console, select **System settings** > **Network monitoring** and under **Active Discovery**, select **Reverse DNS Lookup**.
1. Use the **Schedule Reverse Lookup** options to define your scan as in fixed intervals, per hour, or at a specific time.
All CIDR formats are supported.
- **DNS server address**, which is the DNS server IP address - **DNS server port** - **Number of labels**, which is the number of domain labels you want to display. To get this value, resolve the network IP address to device FQDNs. You can enter up to 30 characters in this field.
- - **Subnets**, which is the subnets that you want to the DNS server to query
+ - **Subnets**, which is the subnets that you want the DNS server to query
1. Toggle on the **Enabled** option at the top to start the reverse lookup query as scheduled, and then select **Save** to finish the configuration.
All CIDR formats are supported.
Use a test device to verify that the reverse DNS lookup settings you'd defined work as expected.
-1. On your sensor console, select **System settings**> **Network monitoring** and under **Active Discovery**, select **Reverse DNS Lookup**.
+1. On your sensor console, select **System settings** > **Network monitoring** and under **Active Discovery**, select **Reverse DNS Lookup**.
1. Make sure that the **Enabled** toggle is selected. 1. Select **Test**.
-1. In the **DSN reverse lookup test for server** dialog, enter an address in the **Lookup Address** and then select **Test**.
+1. In the **DNS reverse lookup test for server** dialog, enter an address in the **Lookup Address** and then select **Test**.
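
You can also verify the DNS server's reverse lookup behavior independently of the sensor. For example, assuming a Linux or macOS machine with `dig` installed; the server and device IP addresses below are placeholders:

```bash
# Ask the DNS server for the PTR record of a monitored device's IP address.
dig @10.1.0.53 -x 10.1.10.25 +short
```
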
## Next steps
-Learn more about active monitoring options. For more information, see:
+For more information, see:
-- [Configure active monitoring for OT networks](configure-active-monitoring.md)-- [Configure Windows Endpoint monitoring](configure-windows-endpoint-monitoring.md)
+- [View your device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md)
+- [View your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md)
defender-for-iot Configure Sensor Settings Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/configure-sensor-settings-portal.md
# Configure OT sensor settings from the Azure portal (Public preview)
-After [onboarding](onboard-sensors.md) a new OT network sensor to Microsoft Defender for IoT, you may want to define several settings directly on the OT sensor console, such as [adding local users](manage-users-sensor.md) or [connecting to an on-premises management console](how-to-manage-individual-sensors.md#connect-a-sensor-to-the-management-console).
+After [onboarding](onboard-sensors.md) a new OT network sensor to Microsoft Defender for IoT, you may want to define several settings directly on the OT sensor console, such as [adding local users](manage-users-sensor.md) or [connecting to an on-premises management console](ot-deploy/connect-sensors-to-management.md).
Selected OT sensor settings, listed below, are also available directly from the Azure portal, and can be applied in bulk across multiple cloud-connected OT sensors at a time, or across all OT sensors in a specific site or zone. This article describes how to view and configure view OT network sensor settings from the Azure portal.
Selected OT sensor settings, listed below, are also available directly from the
To define OT sensor settings, make sure that you have the following: -- **An Azure subscription onboarded to Defender for IoT**. If you need to, [sign up for a free account](https://azure.microsoft.com/free/) and then use the [Quickstart: Get started with Defender for IoT](getting-started.md) to onboard.
+- **An Azure subscription onboarded to Defender for IoT**. If you need to, [sign up for a free account](https://azure.microsoft.com/free/), and then use the [Quickstart: Get started with Defender for IoT](getting-started.md) to onboard.
- **Permissions**:
Your new setting is now listed on the **Sensor settings (Preview)** page under i
> [!TIP] > You may want to configure exceptions to your settings for a specific OT sensor or zone. In such cases, create an extra setting for the exception. >
-> Settings override eachother in a hierarchical manner, so that if your setting is applied to a specific OT sensor, it overrides any related settings that are applied to the entire zone or site. To create an exception for an entire zone, add a setting for that zone to override any related settings applied to the entire site.
+> Settings override each other in a hierarchical manner, so that if your setting is applied to a specific OT sensor, it overrides any related settings that are applied to the entire zone or site. To create an exception for an entire zone, add a setting for that zone to override any related settings applied to the entire site.
> ## View and edit current OT sensor settings
For a bandwidth cap, define the maximum bandwidth you want the sensor to use for
**Default**: 1500 Kbps
-**Minimum required for a stable connection to Azure** 350 Kbps. At this minimum setting, connections to the sensor console may be slower than usual.
+**Minimum required for a stable connection to Azure**: 350 Kbps. At this minimum setting, connections to the sensor console may be slower than usual.
### Subnet
-To define your sensor's subnets do any of the following:
+To define your sensor's subnets, do any of the following:
- Select **Import subnets** to import a comma-separated list of subnet IP addresses and masks. Select **Export subnets** to export a list of currently configured data, or **Clear all** to start from scratch. -- Enter values in the **IP Address**, **Mask**,l and **Name** fields to add subnet details manually. Select **Add subnet** to add additional subnets as needed.
+- Enter values in the **IP Address**, **Mask**, and **Name** fields to add subnet details manually. Select **Add subnet** to add more subnets as needed.
### VLAN naming
Select **Add VLAN** to add more VLANs as needed.
> [Manage sensors from the Azure portal](how-to-manage-sensors-on-the-cloud.md) > [!div class="nextstepaction"]
-> [Manage OT sensors from the sensor console](how-to-manage-individual-sensors.md)
+> [Manage OT sensors from the sensor console](how-to-manage-individual-sensors.md)
defender-for-iot Configure Windows Endpoint Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/configure-windows-endpoint-monitoring.md
Currently the only protocol supported for Windows Endpoint Monitoring with Defen
## Prerequisites
-Make sure that you've completed the prerequisites listed in [Configure active monitoring for OT networks](configure-active-monitoring.md), and have confirmed that active monitoring is right for your network.
+Before performing the procedures in this article, you must have:
+- An OT network sensor [installed](ot-deploy/install-software-ot-sensor.md), [activated, and configured](ot-deploy/activate-deploy-sensor.md).
-Before you can configure a WEM scan from your OT sensor console, you'll also need to configure a firewall rule, and WMI domain scanning on your Windows machine.
+- Access to your OT network sensor as an **Admin** user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
+
+- Completed the prerequisites outlined in [Configure active monitoring for OT networks](configure-active-monitoring.md), and confirmed that active monitoring is right for your network.
+
+- Before you can configure a WEM scan from your OT sensor console, you'll also need to configure a firewall rule and WMI domain scanning on your Windows machine.
## Configure the required firewall rule
Configure a firewall rule that opens outgoing traffic from the sensor to the sca
## Configure WMI domain scanning
-Before you can configure a WEM scan from your sensor, you need to configure WMI domain scanning on the Windows machine you'll be scanning.
+Before you can configure a WEM scan from your sensor, you need to configure WMI domain scanning on the Windows machine you'll be scanning.
This procedure describes how to configure WMI scanning using a Group Policy Object (GPO), updating your firewall settings, defining permissions for your WMI namespace, and defining a local group.
This procedure describes how to configure WMI scanning using a Group Policy Obje
1. In the **Access Permission** dialog:
- 1. In the **Group or user names** list, select **wmiuser**
- 1. In the **Permissions for ANONYMOUS LOGON** box below, select **Allow** for both **Local Access** and **Remote Access**.
+ 1. In the **Group or user names** list, select **wmiuser**.
+ 1. In the **Permissions for ANONYMOUS LOGON** box, select **Allow** for both **Local Access** and **Remote Access**.
Select **OK** to close the **Access Permissions** dialog.
This procedure describes how to configure WMI scanning using a Group Policy Obje
1. In the **Access Permission** dialog:
- 1. In the **Group or user names** list, select **wmiuser**
- 1. In the **Permissions for Administrators** box below, select **Allow** for the **Local Launch**, **Remote Launch**, **Local Activation**, and **Remote Activation** options.
+ 1. In the **Group or user names** list, select **wmiuser**.
+ 1. In the **Permissions for Administrators** box, select **Allow** for the **Local Launch**, **Remote Launch**, **Local Activation**, and **Remote Activation** options.
Select **OK** to close the **Access Permissions** dialog.
This procedure describes how to configure WMI scanning using a Group Policy Obje
### Configure permissions for your WMI namespace
-This procedure describes how to define permissions for your WMI namespace, and cannot be completed with a regular GPO.
+This procedure describes how to define permissions for your WMI namespace, and can't be completed with a regular GPO.
If you'll be using a non-admin account to run your WEM scans, this procedure is critical and must be performed exactly as instructed to allow sign-in attempts using WMI.
If you'll be using a non-admin account to run your WEM scans, this procedure is
1. Select **Add**, and in the **Enter the object names to select** box, enter **wmiuser**. 1. Select **Check Names** > **OK**.
-1. In the **Group or user names** box, select the **wmiuser** account. In the **Permissions for Authenticated Users** box below, select **Allow** for the following permissions:
+1. In the **Group or user names** box, select the **wmiuser** account. In the **Permissions for Authenticated Users** box, select **Allow** for the following permissions:
- **Execute Methods** - **Enable Account**
If you'll be using a non-admin account to run your WEM scans, this procedure is
**To configure a WEM scan**:
-1. On your OT sensor console, select **System settings**> **Network monitoring** > **Active discovery** > **Windows Endpoint Monitoring (WMI)**.
+1. On your OT sensor console, select **System settings** > **Network monitoring** > **Active discovery** > **Windows Endpoint Monitoring (WMI)**.
1. In the **Edit scan ranges configuration** section, enter the ranges you want to scan and add the username and password required to access those resources.
- - We recommend enter values with domain or local administrator privileges for the best scanning results.
- - Select **Import ranges** to import a CSV file with a set of ranges you want to scan. Make sure your CSV file includes the following data: **FROM**, **TO**, **USER**, **PASSWORD**, **DISABLE**, where **DISABLE** is defined as **TRUE**/**FALSE**.
- - To get a csv list of all ranges currently configured for WEM scans, select **Export ranges**.
+ - We recommend entering values with domain or local administrator privileges for the best scanning results.
+ - Select **Import ranges** to import a .csv file with a set of ranges you want to scan. Make sure your .csv file includes the following data: **FROM**, **TO**, **USER**, **PASSWORD**, **DISABLE**, where **DISABLE** is defined as **TRUE**/**FALSE**.
+ - To get a .csv list of all ranges currently configured for WEM scans, select **Export ranges**.
-1. In the **Scan will run** area, define whether you want to run the scan in in intervals, every few hours, or by a specific time. If you select **By specific time**, an additional **Add scan time** option appears, which you can use to configure several scans running at specific times.
+1. In the **Scan will run** area, define whether you want to run the scan in intervals, every few hours, or by a specific time. If you select **By specific time**, an additional **Add scan time** option appears, which you can use to configure several scans running at specific times.
While you can configure your WEM scan to run as often as you like, only one WEM scan can run at a time.
If you'll be using a non-admin account to run your WEM scans, this procedure is
- To run your scan manually now, select **Apply changes** > **Manually scan**.
- - To let your scan run later as configured, select **Apply changes** and then close the pane as needed.
+ - To let your scan run later as configured, select **Apply changes**, and then close the pane as needed.
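
The **Import ranges** option described earlier expects a .csv file with the **FROM**, **TO**, **USER**, **PASSWORD**, and **DISABLE** columns. The following is a hypothetical sketch of such a file; the values are placeholders, and it's safest to first select **Export ranges** and match the structure of the exported file.

```bash
# Hypothetical scan-range file; values are placeholders.
cat > wem-ranges.csv <<'EOF'
FROM,TO,USER,PASSWORD,DISABLE
10.10.1.1,10.10.1.254,CONTOSO\wmiuser,<password>,FALSE
EOF
```
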
**To view scan results:**
-1. When your scan is finished, go back to the **System settings**> **Network monitoring** > **Active discovery** > **Windows Endpoint Monitoring (WMI)** page on your sensor console.
+1. When your scan is finished, go back to the **System settings** > **Network monitoring** > **Active discovery** > **Windows Endpoint Monitoring (WMI)** page on your sensor console.
1. Select **View Scan Results**. A .csv file with the scan results is downloaded to your computer. ## Next steps
-Learn more about active monitoring options. For more information, see:
+For more information, see:
+- [View your device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md)
+- [View your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md)
- [Configure active monitoring for OT networks](configure-active-monitoring.md)-- [Configure DNS servers for reverse lookup resolution for OT monitoring](configure-reverse-dns-lookup.md)
+- [Configure DNS servers for reverse lookup resolution for OT monitoring »](configure-reverse-dns-lookup.md)
+- [Import device information to a sensor »](how-to-import-device-information.md)
defender-for-iot Connect Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/connect-sensors.md
Title: Connect OT sensors to the Azure portal - Microsoft Defender for IoT
-description: Learn how to connect your Microsoft Defender for IoT OT sensors to Azure using one of several cloud connection methods.
+ Title: Configure proxy connections from your OT sensor to Azure
+description: Learn how to configure proxy settings on your OT sensors to connect to Azure.
Previously updated : 09/11/2022 Last updated : 03/20/2023
-# Connect your OT sensors to the cloud
+# Configure proxy settings on an OT sensor
-This article describes how to connect your OT network sensors to the Defender for IoT portal in Azure, for OT sensor software versions 22.x and later.
+This article is one in a series of articles describing the [deployment path](ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT, and describes how to configure proxy settings on your OT sensor to connect to Azure.
-For more information about each connection method, see [Sensor connection methods](architecture-connections.md).
-## Prerequisites
+You can skip this step in the following cases:
-To use the connection methods described in this article, you must have an OT network sensor with software version 22.x or later.
+- If you're working in air-gapped environment and locally managed sensors
-For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
+- If you're using a [direct connection](architecture-connections.md#direct-connections) between your OT sensor and Azure. In this case, you've already performed all required steps when you [provisioned your sensor for cloud management](ot-deploy/provision-cloud-management.md)
-## Choose a sensor connection method
+## Prerequisites
-Use this section to help determine which connection method is right for your organization.
+To perform the steps described in this article, you'll need:
-|If ... |... Then |
-|||
-|- You require private connectivity between your sensor and Azure, <br>- Your site is connected to Azure via ExpressRoute, or <br>- Your site is connected to Azure over a VPN | **[Connect via an Azure proxy](#connect-via-an-azure-proxy)** |
-|- Your sensor needs a proxy to reach from the OT network to the cloud, or <br>- You want multiple sensors to connect to Azure through a single point | **[Connect via proxy chaining](#connect-via-proxy-chaining)** |
-|- You want to connect your sensor to Azure directly | **[Connect directly](#connect-directly)** |
-|- You have sensors hosted in multiple public clouds | **[Connect via multicloud vendors](#connect-via-multicloud-vendors)** |
+- An OT sensor [installed](ot-deploy/install-software-ot-sensor.md) and [activated](ot-deploy/activate-deploy-sensor.md)
+
+- An understanding of the [supported connection methods](architecture-connections.md) for cloud-connected Defender for IoT sensors, and a plan for your [OT site deployment](best-practices/plan-prepare-deploy.md) that includes the connection method you want to use for each sensor.
+
+- Access to the OT sensor as an **Admin** user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
+This step is performed by your deployment and connectivity teams.
## Connect via an Azure proxy
-This section describes how to connect your sensor to Defender for IoT in Azure using an Azure proxy. Use this procedure in the following situations:
+This section describes how to connect your sensor to Defender for IoT in Azure using an [Azure proxy](architecture-connections.md#proxy-connections-with-an-azure-proxy). Use this procedure in the following situations:
- You require private connectivity between your sensor and Azure - Your site is connected to Azure via ExpressRoute - Your site is connected to Azure over a VPN
-For more information, see [Proxy connections with an Azure proxy](architecture-connections.md#proxy-connections-with-an-azure-proxy).
- ### Prerequisites Before you start, make sure that you have: -- An Azure Subscription and an account with **Contributor** permissions to the subscription- - A Log Analytics workspace for monitoring logs - Remote site connectivity to the Azure VNET - A proxy server resource, with firewall permissions to access Microsoft cloud services. The procedure described in this article uses a Squid server hosted in Azure. -- Outbound HTTPS traffic on port 443 enabled to the required endpoints for Defender for IoT. Download the list of required endpoints from the **Sites and sensors** page: Select an OT sensor with a supported software version, or a site with one or more supported sensors. And then select **More actions** > **Download endpoint details**.- > [!IMPORTANT] > Microsoft Defender for IoT does not offer support for Squid or any other proxy services. It is the customer's responsibility to set up and maintain the proxy service. >
+### Allow outbound traffic to required endpoints
+
+Ensure that outbound HTTPS traffic on port 443 is allowed from your sensor to the required endpoints for Defender for IoT.
+
+For more information, see [Provision OT sensors for cloud management](ot-deploy/provision-cloud-management.md).
+ ### Configure sensor proxy settings
-If you already have a proxy set up in your Azure VNET, you can start working with a proxy by defining the proxy settings on your sensor console.
+If you already have a proxy set up in your Azure VNET, start by defining the proxy settings on your sensor console:
-1. On your sensor console, go to **System settings > Sensor Network Settings**.
+1. Sign into your OT sensor and select **System settings > Sensor Network Settings**.
1. Toggle on the **Enable Proxy** option and define your proxy host, port, username, and password.
-If you don't yet have a proxy configured in your Azure VNET, use the following procedures to configure your proxy:
+If you don't yet have a proxy configured in your Azure VNET, use the following steps to configure your proxy:
1. [Define a storage account for NSG logs](#step-1-define-a-storage-account-for-nsg-logs)
If you don't yet have a proxy configured in your Azure VNET, use the following p
1. [Create an Azure load balancer](#step-6-create-an-azure-load-balancer) 1. [Configure a NAT gateway](#step-7-configure-a-nat-gateway)
-### Step 1: Define a storage account for NSG logs
+#### Step 1: Define a storage account for NSG logs
In the Azure portal, create a new storage account with the following settings:
In the Azure portal, create a new storage account with the following settings:
|**Data Protection** | Keep all options cleared | |**Advanced** | Keep all default values |
+#### Step 2: Define virtual networks and subnets
-### Step 2: Define virtual networks and subnets
-
-Create the following VNET and contained Subnets:
+Create the following VNET and contained subnets:
|Name |Recommended size | |||
Create the following VNET and contained Subnets:
|- `AzureBastionSubnet` (optional) | /26 | | | |
-### Step 3: Define a virtual or local network gateway
+#### Step 3: Define a virtual or local network gateway
Create a VPN or ExpressRoute Gateway for virtual gateways, or create a local gateway, depending on how you connect your on-premises network to Azure.
For more information, see:
- [Connect a virtual network to an ExpressRoute circuit using the portal](../../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md) - [Modify local network gateway settings using the Azure portal](../../vpn-gateway/vpn-gateway-modify-local-network-gateway-portal.md)
-### Step 4: Define network security groups
+#### Step 4: Define network security groups
1. Create an NSG and define the following inbound rules:
For more information, see:
- Select **Enable Traffic Analytics** - Select your Log Analytics workspace
-### Step 5: Define an Azure virtual machine scale set
+#### Step 5: Define an Azure virtual machine scale set
Define an Azure virtual machine scale set to create and manage a group of load-balanced virtual machines, where you can automatically increase or decrease the number of virtual machines as needed.
-Use the following procedure to create a scale set to use with your sensor connection. For more information, see [What are virtual machine scale sets?](../../virtual-machine-scale-sets/overview.md)
+For more information, see [What are virtual machine scale sets?](../../virtual-machine-scale-sets/overview.md)
+
+**To create a scale set to use with your sensor connection**:
1. Create a scale set with the following parameter definitions:
Use the following procedure to create a scale set to use with your sensor connec
- apt-get -y upgrade; [ -e /var/run/reboot-required ] && reboot ```
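
The custom data for the scale set instances should install and enable Squid and keep the instances patched. The following is a sketch of the equivalent shell commands, under the assumption that you adapt them to your cloud-init or custom script format; the final lines match the fragment above.

```bash
# Commands the scale set's custom data should run on each instance (sketch).
apt-get update
apt-get install -y squid
systemctl enable --now squid
apt-get -y upgrade
[ -e /var/run/reboot-required ] && reboot
```
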
-### Step 6: Create an Azure load balancer
+#### Step 6: Create an Azure load balancer
Azure Load Balancer is a layer-4 load balancer that distributes incoming traffic among healthy virtual machine instances using a hash-based distribution algorithm. For more information, see the [Azure Load Balancer documentation](../../load-balancer/load-balancer-overview.md).
-To create an Azure load balancer for your sensor connection:
+**To create an Azure load balancer for your sensor connection**:
-1. Create a load balancer with a standard SKU and an **Internal** type to ensure that the load balancer is closed to the internet.
+1. Create a load balancer with a standard SKU and an **Internal** type to ensure that the load balancer is closed to the internet.
-1. Define a dynamic frontend IP address in the `proxysrv` subnet you created [earlier](#step-2-define-virtual-networks-and-subnets), setting the availability to zone-redundant.
+1. Define a dynamic frontend IP address in the `proxysrv` subnet you created [earlier](#step-2-define-virtual-networks-and-subnets), setting the availability to zone-redundant.
-1. For a backend, choose the virtual machine scale set you created in the [earlier](#step-5-define-an-azure-virtual-machine-scale-set).
+1. For a backend, choose the virtual machine scale set you created [earlier](#step-5-define-an-azure-virtual-machine-scale-set).
1. On the port defined in the sensor, create a TCP load balancing rule connecting the frontend IP address with the backend pool. The default port is 3128.
To create an Azure load balancer for your sensor connection:
1. Select **Sent to Log Analytics workspace**, and then select your Log Analytics workspace.
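The following Azure CLI sketch outlines the load balancer steps above: an internal Standard SKU load balancer in the proxy subnet, with a TCP rule on the default port 3128. Resource names are placeholders carried over from the earlier sketches.

```bash
# Sketch: internal Standard load balancer with a TCP rule on the default proxy port 3128
az network lb create \
  --resource-group rg-iot-proxy \
  --name lb-iot-proxy \
  --sku Standard \
  --vnet-name vnet-iot-proxy \
  --subnet ProxyserverSubnet \
  --frontend-ip-name proxyFrontend \
  --backend-pool-name proxyBackend

az network lb rule create \
  --resource-group rg-iot-proxy \
  --lb-name lb-iot-proxy \
  --name proxyRule3128 \
  --protocol Tcp \
  --frontend-port 3128 \
  --backend-port 3128 \
  --frontend-ip-name proxyFrontend \
  --backend-pool-name proxyBackend
```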
-### Step 7: Configure a NAT gateway
+#### Step 7: Configure a NAT gateway
To configure a NAT gateway for your sensor connection:
-1. Create a new NAT Gateway.
+1. Create a new NAT Gateway.
-1. In the **Outbound IP** tab, select **Create a new public IP address**.
+1. In the **Outbound IP** tab, select **Create a new public IP address**.
-1. In the **Subnet** tab, select the `ProxyserverSubnet` subnet you created [earlier](#step-2-define-virtual-networks-and-subnets).
+1. In the **Subnet** tab, select the `ProxyserverSubnet` subnet you created [earlier](#step-2-define-virtual-networks-and-subnets).
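A minimal Azure CLI sketch of the NAT gateway steps above is shown below, reusing the same placeholder resource names from the earlier sketches. The public IP and gateway names are illustrative.

```bash
# Sketch: NAT gateway with a new public IP, attached to the proxy subnet (names are placeholders)
az network public-ip create \
  --resource-group rg-iot-proxy \
  --name pip-iot-natgw \
  --sku Standard

az network nat gateway create \
  --resource-group rg-iot-proxy \
  --name natgw-iot-proxy \
  --public-ip-addresses pip-iot-natgw

az network vnet subnet update \
  --resource-group rg-iot-proxy \
  --vnet-name vnet-iot-proxy \
  --name ProxyserverSubnet \
  --nat-gateway natgw-iot-proxy
```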
+
+Continue by [defining the proxy settings](#configure-sensor-proxy-settings) on your OT sensor.
## Connect via proxy chaining
We've validated this procedure using the open-source [Squid](http://www.squid-ca
> Microsoft Defender for IoT does not offer support for Squid or any other proxy services. It is the customer's responsibility to set up and maintain the proxy service. >
-### Configuration
+### Configure a proxy chaining connection
This procedure describes how to install and configure a connection between your sensors and Defender for IoT using the latest version of Squid on an Ubuntu server.

1. Define your proxy settings on each sensor:
- 1. On your sensor console, go to **System settings > Sensor Network Settings**.
+ 1. Sign into your OT sensor and select **System settings > Sensor Network Settings**.
1. Toggle on the **Enable Proxy** option and define your proxy host, port, username, and password.
This procedure describes how to install and configure a connection between your
    sudo systemctl enable squid
    ```
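As a sketch only, the following commands show where the Squid configuration typically lives and the kind of directives you might add so that your sensor subnet can use the proxy on port 3128. The subnet value is a placeholder, and Squid configuration details can vary by version, so review your distribution's default `squid.conf` before making changes.

```bash
# Sketch: adjust the Squid configuration for your sensor subnet (subnet value is a placeholder)
sudo cp /etc/squid/squid.conf /etc/squid/squid.conf.bak   # keep a backup of the default config
sudo vi /etc/squid/squid.conf

# In squid.conf, make sure Squid listens on the port configured on the sensor (3128 by default),
# and add an ACL for your sensor subnet *above* the final "http_access deny all" line, for example:
#   http_port 3128
#   acl sensor_subnet src 10.10.0.0/16
#   http_access allow sensor_subnet

# Restart Squid to apply the change
sudo systemctl restart squid
```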
-1. Connect your proxy to Defender for IoT:
-
- 1. Download the list of required endpoints from the **Sites and sensors** page: Select an OT sensor with a supported software version, or a site with one or more supported sensors. And then select **More actions** > **Download endpoint details**.
- 1. Enable outbound HTTPS traffic on port 443 from the sensor to each of the required endpoints for Defender for IoT.
--
-> [!IMPORTANT]
-> Some organizations must define firewall rules by IP addresses. If this is true for your organization, it's important to know that the Azure public IP ranges are updated weekly.
->
-> Make sure to download the new JSON file each week and make the required changes on your site to correctly identify services running in Azure. You'll need the updated IP ranges for **AzureIoTHub**, **Storage**, and **EventHub**. See the [latest IP ranges](https://www.microsoft.com/en-us/download/details.aspx?id=56519).
->
-
-## Connect directly
-
-This section describes what you need to configure a direct sensor connection to Defender for IoT in Azure. For more information, see [Direct connections](architecture-connections.md#direct-connections).
-
-1. Download the list of required endpoints from the **Sites and sensors** page on the Azure portal. Select an OT sensor with a supported software version, or a site with one or more supported sensors. And then select **More actions** > **Download endpoint details**.
-
-1. Ensure that your sensor can access the cloud using HTTPS on port 443 to each of the listed endpoints in the downloaded list.
+1. Connect your proxy to Defender for IoT. Ensure that outbound HTTPS traffic on port 443 is allowed from your sensor to each of the required endpoints for Defender for IoT.
-1. Azure public IP addresses are updated weekly. If you must define firewall rules based on IP addresses, make sure to download the new JSON file each week and make the required changes on your site to correctly identify services running in Azure. You'll need the updated IP ranges for **AzureIoTHub**, **Storage**, and **EventHub**. See the [latest IP ranges](https://www.microsoft.com/en-us/download/details.aspx?id=56519).
+ For more information, see [Provision OT sensors for cloud management](ot-deploy/provision-cloud-management.md).
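To sanity-check the chain before relying on it, you can test an HTTPS request through the proxy from a machine on the sensor's network. The proxy IP address and the target URL below are placeholders; substitute an endpoint from the list you downloaded from the **Sites and sensors** page.

```bash
# Sketch: test an HTTPS request through the proxy (proxy IP and target URL are placeholders)
curl --proxy http://10.1.1.10:3128 \
     --silent --output /dev/null --write-out 'HTTP status: %{http_code}\n' \
     https://management.azure.com
```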
## Connect via multicloud vendors
This section describes how to connect your sensor to Defender for IoT in Azure f
### Prerequisites
-Before you start:
+Before you start, make sure that you have a sensor deployed in a public cloud, such as AWS or Google Cloud, and configured to monitor [SPAN traffic](traffic-mirroring/configure-mirror-span.md).
-- Make sure that you have a sensor deployed in a public cloud, such as AWS or Google Cloud, and configured to monitor SPAN traffic.
+### Select a multicloud connectivity method
-- Choose the multicloud connectivity method that's right for your organization:
+Use the following flow chart to determine which connectivity method to use:
- Use the following flow chart to determine which connectivity method to use:
- :::image type="content" source="media/architecture-connections/multicloud-flow-chart.png" alt-text="Flow chart to determine which connectivity method to use.":::
+- **Use public IP addresses over the internet** if you don't need to exchange data using private IP addresses
- - **Use public IP addresses over the internet** if you don't need to exchange data using private IP addresses
+- **Use site-to-site VPN over the internet** only if you don't require any of the following:
- - **Use site-to-site VPN over the internet** only if you don't* require any of the following:
+ - Predictable throughput
+ - SLA
+ - High data volume transfers
+ - Avoid connections over the public internet
- - Predictable throughput
- - SLA
- - High data volume transfers
- - Avoid connections over the public internet
+- **Use ExpressRoute** if you require predictable throughput, SLA, high data volume transfers, or to avoid connections over the public internet.
- - **Use ExpressRoute** if you require predictable throughput, SLA, high data volume transfers, or to avoid connections over the public internet.
+ In this case:
- In this case:
-
- - If you want to own and manage the routers making the connection, use ExpressRoute with customer-managed routing.
- - If you don't need to own and manage the routers making the connection, use ExpressRoute with a cloud exchange provider.
+ - If you want to own and manage the routers making the connection, use ExpressRoute with customer-managed routing.
+ - If you don't need to own and manage the routers making the connection, use ExpressRoute with a cloud exchange provider.
### Configuration
Before you start:
1. To enable private connectivity between your VPCs and Defender for IoT, connect your VPC to an Azure VNET over a VPN connection. For example if you're connecting from an AWS VPC, see our TechCommunity blog: [How to create a VPN between Azure and AWS using only managed solutions](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/how-to-create-a-vpn-between-azure-and-aws-using-only-managed/ba-p/2281900).
-1. After your VPC and VNET are configured, connect to Defender for IoT as you would when connecting via an Azure proxy. For more information, see [Connect via an Azure proxy](#connect-via-an-azure-proxy).
-
-## Migration for existing customers
-
-If you're an existing customer with a production deployment and sensors connected using the legacy IoT Hub method, start with the following steps to ensure a full and safe migration to an updated connection method.
-
-1. **Review your existing production deployment** and how sensors are currently connection to Azure. Confirm that the sensors in production networks can reach the Azure data center resource ranges.
-
-1. **Determine which connection method is right** for each production site. For more information, see [Choose a sensor connection method](connect-sensors.md#choose-a-sensor-connection-method).
+1. After your VPC and VNET are configured, connect to Defender for IoT as you would when [connecting via an Azure proxy](#connect-via-an-azure-proxy).
-1. **Configure any other resources required** as described in the procedure in this article for your chosen connectivity method. For example, other resources might include a proxy, VPN, or ExpressRoute.
-
- For any connectivity resources outside of Defender for IoT, such as a VPN or proxy, consult with Microsoft solution architects to ensure correct configurations, security, and high availability.
-
-1. **If you have legacy sensor versions installed**, we recommend that you update your sensors at least to a version 22.1.x or higher. In this case, make sure that you've updated your firewall rules and activated your sensor with a new activation file.
-
- Sign in to each sensor after the update to verify that the activation file was applied successfully. Also check the Defender for IoT **Sites and sensors** page in the Azure portal to make sure that the updated sensors show as **Connected**.
-
- For more information, see [Update OT system software](update-ot-software.md) and [Sensor access to Azure portal](how-to-set-up-your-network.md#sensor-access-to-azure-portal).
-
-1. **Start migrating with a test lab or reference project** where you can validate your connection and fix any issues found.
-
-1. **Create a plan of action for your migration**, including planning any maintenance windows needed.
-
-1. **After the migration in your production environment**, you can delete any previous IoT Hubs that you had used before the migration. Make sure that any IoT Hubs you delete aren't used by any other
-
- - If you've upgraded your versions, make sure that all updated sensors indicate software version 22.1.x or higher.
+## Next steps
- - Check the active resources in your account and make sure there are no other services connected to your IoT Hub.
+We recommend that you configure an Active Directory connection for managing on-premises users on your OT sensor, and also set up sensor health monitoring via SNMP.
- - If you're running a hybrid environment with multiple sensor versions, make sure any sensors with software version 22.1.x can connect to Azure. Use firewall rules that allow outbound HTTPS traffic on port 443 to each of the required endpoints.
+If you don't configure these settings during deployment, you can return and configure them later. For more information, see:
- Find the list of required endpoints for Defender for IoT from the **Sites and sensors** page on the Azure portal. Select an OT sensor with a supported software version, or a site with one or more supported sensors. And then select **More actions** > **Download endpoint details**.
+- [Set up SNMP MIB monitoring on an OT sensor](how-to-set-up-snmp-mib-monitoring.md)
+- [Configure an Active Directory connection](manage-users-sensor.md#configure-an-active-directory-connection)
-While you'll need to migrate your connections before the [legacy version reaches end of support](release-notes.md#versioning-and-support-for-on-premises-software-versions), you can currently deploy a hybrid network of sensors, including legacy software versions with their IoT Hub connections, and sensors with the connection methods described in this article.
+> [!div class="step-by-step"]
+> [« Activate and set up your OT network sensor](ot-deploy/activate-deploy-sensor.md)
-## Next steps
+> [!div class="step-by-step"]
+> [Control the OT traffic monitored by Microsoft Defender for IoT »](how-to-control-what-traffic-is-monitored.md)
-For more information, see [Sensor connection methods](architecture-connections.md).
defender-for-iot Detect Windows Endpoints Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/detect-windows-endpoints-script.md
The script described in this article returns the following details about each de
- Installed programs - Last knowledge base update
-For more information, see [Configure Windows Endpoint Monitoring](configure-windows-endpoint-monitoring.md).
+If an OT network sensor has already learned the device, running the script outlined in this article retrieves the device's information and enrichment data.
-## Supported operating systems
+## Prerequisites
+
+Before performing the procedures in this article, you must have:
+
+- An OT network sensor [installed](ot-deploy/install-software-ot-sensor.md), [activated, and configured](ot-deploy/activate-deploy-sensor.md), and monitoring the network where your device is connected.
+
+- Access to your OT network sensor as an **Admin** user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
+
+- Administrator permissions on any devices where you intend to run the script.
+
+### Supported operating systems
The script described in this article is supported for the following Windows operating systems:
The script described in this article is supported for the following Windows oper
- Windows 10
- Windows Server 2003/2008/2012/2016/2019
-## Prerequisites
-
-Before you start, make sure that you have:
--- Administrator permissions on any devices where you intend to run the script-- A Defender for IoT OT sensor already monitoring the network where the device is connected-
-If an OT network sensor has already learned the device, running the script will retrieve its information and enrichment data.
- ## Run the script This procedure describes how to obtain, deploy, and run the script on the Windows workstations and servers that you want to monitor in Defender for IoT.
The script you run to detect enriched Windows data is run as a utility and not a
Files generated by the script: - Remain on the local drive until you delete them.-- Must remain in the same location. Do not separate the generated files.
+- Must remain in the same location. Don't separate the generated files.
- Are overwritten if you run the script again. ## Import device details
After having run the script as described [earlier](#run-the-script), import the
**To import device details to your sensor**:
-1. Use standard, automated methods and tools to move the generated files from each Windows endpoint to a location accessible from your OT sensors.
+1. Use standard, automated methods and tools to move the generated files from each Windows endpoint to a location accessible from your OT sensors.
- Do not update filenames or separate the files from each other.
+ Don't update filenames or separate the files from each other.
-1. On your OT sensor console, select **System Settings** > **Import Settings** > **Windows Information**.
+1. Sign into your OT sensor console, and select **System Settings** > **Import Settings** > **Windows Information**.
1. Select **Import File**, and then select all the files (Ctrl+A).
-1. Select **Close**. The device registry information is imported and a successful confirmation message is shown
+1. Select **Close**. The device registry information is imported and a successful confirmation message is shown.
If there's a problem uploading one of the files, you'll be informed which file upload failed. ## Next steps
-For more information, see [View detected devices on-premises](how-to-investigate-sensor-detections-in-a-device-inventory.md).
+For more information, see [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md) and [Import extra data for detected OT devices](how-to-import-device-information.md).
+
defender-for-iot Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/device-inventory.md
For example:
:::image type="content" source="media/device-inventory/azure-device-inventory.png" alt-text="Screenshot of the Defender for IoT Device inventory page in the Azure portal." lightbox="media/device-inventory/azure-device-inventory.png"::: +
+## Supported devices
+
+Defender for IoT's device inventory supports the following device classes:
+
+|Devices |For example ... |
+|||
+|**Manufacturing**| Industrial and operational devices, such as pneumatic devices, packaging systems, industrial packaging systems, industrial robots |
+|**Building** | Access panels, surveillance devices, HVAC systems, elevators, smart lighting systems |
+|**Health care** | Glucose meters, monitors |
+|**Transportation / Utilities** | Turnstiles, people counters, motion sensors, fire and safety systems, intercoms |
+|**Energy and resources** | DCS controllers, PLCs, historian devices, HMIs |
+|**Endpoint devices** | Workstations, servers, or mobile devices |
+| **Enterprise** | Smart devices, printers, communication devices, or audio/video devices |
+| **Retail** | Barcode scanners, humidity sensor, punch clocks |
+
+A *transient* device type indicates a device that was detected for only a short time. We recommend investigating these devices carefully to understand their impact on your network.
+
+*Unclassified* devices are devices that don't otherwise have an out-of-the-box category defined.
+ ## Device management options The Defender for IoT device inventory is available in the Azure portal, OT network sensor consoles, and the on-premises management console.
For more information, see:
> - [Defender for Endpoint device discovery](/microsoft-365/security/defender-endpoint/device-discovery) >
-## Supported devices
+## Automatically consolidated devices
-Defender for IoT's device inventory supports device types across a variety of industries and fields.
+When you've deployed Defender for IoT at scale, with several OT sensors, each sensor might detect different aspects of the same device. To prevent duplicated devices in your device inventory, Defender for IoT assumes that devices found in the same zone, with a logical combination of similar characteristics, are the same device. Defender for IoT automatically consolidates these devices and lists them only once in the device inventory.
-|Devices |For example ... |
-|||
-|**Manufacturing**| Industrial and operational devices, such as pneumatic devices, packaging systems, industrial packaging systems, industrial robots |
-|**Building** | Access panels, surveillance devices, HVAC systems, elevators, smart lighting systems |
-|**Health care** | Glucose meters, monitors |
-|**Transportation / Utilities** | Turnstiles, people counters, motion sensors, fire and safety systems, intercoms |
-|**Energy and resources** | DCS controllers, PLCs, historian devices, HMIs |
-|**Endpoint devices** | Workstations, servers, or mobile devices |
-| **Enterprise** | Smart devices, printers, communication devices, or audio/video devices |
-| **Retail** | Barcode scanners, humidity sensor, punch clocks |
+For example, any devices with the same IP and MAC address detected in the same zone are consolidated and identified as a single device in the device inventory. If you have separate devices with recurring IP addresses that are detected by multiple sensors, you'll want each of these devices to be identified separately. In such cases, [onboard your OT sensors](onboard-sensors.md) to different zones so that each device is identified as a separate and unique device, even if they have the same IP address. Devices that have the same MAC address but different IP addresses aren't merged, and continue to be listed as unique devices.
A *transient* device type indicates a device that was detected for only a short time. We recommend investigating these devices carefully to understand their impact on your network. *Unclassified* devices are devices that don't otherwise have an out-of-the-box category defined.
+> [!TIP]
+> Define [sites and zones](best-practices/plan-corporate-monitoring.md#plan-ot-sites-and-zones) in Defender for IoT to harden overall network security, follow principles of [Zero Trust](/security/zero-trust/), and gain clarity in the data detected by your sensors.
+>
## Unauthorized devices
defender-for-iot Eiot Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/eiot-defender-for-endpoint.md
This procedure describes how to view related alerts, recommendations, and vulner
- On the **Discovered vulnerabilities** tab, check for any known CVEs associated with the device. Known CVEs can help decide whether to patch, remove, or contain the device and mitigate risk to your network. -- ## Next steps Learn how to set up an Enterprise IoT network sensor (Public preview) and gain more visibility into more IoT segments of your corporate network that aren't otherwise covered by Defender for Endpoint.
Learn how to set up an Enterprise IoT network sensor (Public preview) and gain m
Customers that have set up an Enterprise IoT network sensor will be able to see all discovered devices in the **Device inventory** in either Microsoft 365 Defender, or Defender for IoT in the Azure portal. > [!div class="nextstepaction"]
-> [Enhance device discovery with an Enterprise IoT network sensor](eiot-sensor.md)
+> [Enhance device discovery with an Enterprise IoT network sensor](eiot-sensor.md)
defender-for-iot Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/eiot-sensor.md
For more information, see [Securing IoT devices in the enterprise](concept-enter
## Prerequisites
-Before you start registering an Enterprise IoT sensor:
+This section describes the prerequisites for deploying an Enterprise IoT network sensor.
-- To view Defender for IoT data in Microsoft 365 Defender, including devices, alerts, recommendations, and vulnerabilities, you must have an Enterprise IoT plan, [onboarded from Microsoft 365 Defender](eiot-defender-for-endpoint.md).
+### Azure requirements
+
+- To view Defender for IoT data in Microsoft 365 Defender, including devices, alerts, recommendations, and vulnerabilities, you must have an Enterprise IoT plan, [onboarded from Microsoft 365 Defender](eiot-defender-for-endpoint.md).
If you only want to view data in the Azure portal, an Enterprise IoT plan isn't required. You can also onboard your Enterprise IoT plan from Microsoft 365 Defender after registering your network sensor to bring [extra device visibility and security value](concept-enterprise.md#security-value-in-microsoft-365-defender) to your organization. - Make sure you can access the Azure portal as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user. If you don't already have an Azure account, you can [create your free Azure account today](https://azure.microsoft.com/free/).
+### Network requirements
+
+- Identify the devices and subnets you want to monitor so that you understand where to place an Enterprise IoT sensor in your network. You may want to deploy multiple Enterprise IoT sensors.
+
+- Configure traffic mirroring in your network so that the traffic you want to monitor is mirrored to your Enterprise IoT sensor. Supported traffic mirroring methods are the same as for OT monitoring. For more information, see [Choose a traffic mirroring method for traffic monitoring](best-practices/traffic-mirroring-methods.md).
-- Allocate a physical appliance or a virtual machine (VM) to use as your network sensor. Make sure that your machine has the following specifications:
+### Physical or virtual machine requirements
- | Tier | Requirements |
- |--|--|
- | **Minimum** | To support up to 1 Gbps of data: <br><br>- 4 CPUs, each with 2.4 GHz or more<br>- 16-GB RAM of DDR4 or better<br>- 250 GB HDD |
- | **Recommended** | To support up to 15 Gbps of data: <br><br>- 8 CPUs, each with 2.4 GHz or more<br>- 32-GB RAM of DDR4 or better<br>- 500 GB HDD |
+Allocate a physical appliance or a virtual machine (VM) to use as your network sensor. Make sure that your machine has the following specifications:
- Your machine must also have:
+| Tier | Requirements |
+|--|--|
+| **Minimum** | To support up to 1 Gbps of data: <br><br>- 4 CPUs, each with 2.4 GHz or more<br>- 16-GB RAM of DDR4 or better<br>- 250 GB HDD |
+| **Recommended** | To support up to 15 Gbps of data: <br><br>- 8 CPUs, each with 2.4 GHz or more<br>- 32-GB RAM of DDR4 or better<br>- 500 GB HDD |
- - The [Ubuntu 18.04 Server](https://releases.ubuntu.com/18.04/) operating system. If you don't yet have Ubuntu installed, download the installation files to an external storage, such as a DVD or disk-on-key, and then install it on your appliance or VM. For more information, see the Ubuntu [Image Burning Guide](https://help.ubuntu.com/community/BurningIsoHowto).
+Your machine must also have:
- - Network adapters, at least one for your switch monitoring (SPAN) port, and one for your management port to access the sensor's user interface
+- The [Ubuntu 18.04 Server](https://releases.ubuntu.com/18.04/) operating system. If you don't yet have Ubuntu installed, download the installation files to an external storage, such as a DVD or disk-on-key, and then install it on your appliance or VM. For more information, see the Ubuntu [Image Burning Guide](https://help.ubuntu.com/community/BurningIsoHowto).
+
+- Network adapters, at least one for your switch monitoring (SPAN) port, and one for your management port to access the sensor's user interface
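To quickly confirm that an Ubuntu machine meets the tier requirements above, you can run a few standard commands. This is only a convenience check; the required values are those listed in the table.

```bash
# Sketch: quick hardware checks against the tier requirements listed above
nproc                    # CPU count (minimum tier: 4)
free -h | grep Mem       # installed RAM (minimum tier: 16 GB)
lsblk -d -o NAME,SIZE    # disk size (minimum tier: 250 GB)
ip -brief link show      # network adapters: one for SPAN monitoring, one for management
```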
+
+Your Enterprise IoT sensor must have access to the Azure cloud using a [direct connection](architecture-connections.md#direct-connections). Direct connections are configured for Enterprise IoT sensors using the same procedure as for OT sensors. For more information, see [Provision sensors for cloud management](ot-deploy/provision-cloud-management.md).
## Prepare a physical appliance or VM
This procedure describes how to prepare your physical appliance or VM to install
| HTTPS | TCP | In/Out | 443 | Cloud connection | | DNS | TCP/UDP | In/Out | 53 | Address resolution | - 1. Make sure that your physical appliance or VM can access the cloud using HTTPS on port 443 to the following Microsoft endpoints: - **EventHub**: `*.servicebus.windows.net`
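As an optional check, you can verify DNS resolution and outbound HTTPS reachability from the appliance or VM before registering the sensor. The hostname below is a placeholder for the wildcard endpoints listed in this step; substitute a concrete endpoint for your environment.

```bash
# Sketch: basic DNS (53) and HTTPS (443) reachability checks. Replace EH_HOST with a concrete
# endpoint from the list above, such as a host under *.servicebus.windows.net.
EH_HOST="contoso-namespace.servicebus.windows.net"   # placeholder value
nslookup "$EH_HOST"
nc -vz "$EH_HOST" 443
```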
Delete a sensor if it's no longer in use with Defender for IoT.
1. From the **Sites and sensors** page on the Azure portal, locate your sensor in the grid.
-1. In the row for your sensor, select the **...** options menu on the right > **Delete sensor**.
+1. In the row for your sensor, select the **...** options menu > **Delete sensor**.
For more information, see [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md).
Billing changes will take effect one hour after cancellation of the previous sub
1. Delete the legacy sensor from the previous subscription. In Defender for IoT, go to the **Sites and sensors** page and locate the legacy sensor on the previous subscription.
-1. In the row for your sensor, from the options (**...**) menu on the right, select **Delete** to delete the sensor from the previous subscription.
+1. In the row for your sensor, from the options (**...**) menu, select **Delete** to delete the sensor from the previous subscription.
1. If relevant, cancel the Defender for IoT plan from the previous subscription. For more information, see [Cancel your Enterprise IoT plan](manage-subscriptions-enterprise.md#cancel-your-enterprise-iot-plan).
Billing changes will take effect one hour after cancellation of the previous sub
- [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md). For more information, see [Malware engine alerts](alert-engine-messages.md#malware-engine-alerts). -- [Enhance security posture with security recommendations](recommendations.md)
+- [Enhance security posture with security recommendations](recommendations.md)
defender-for-iot Faqs Ot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/faqs-ot.md
You can change user passwords or recover access to privileged users on both the
## How do I activate the sensor and on-premises management console
-For information on how to activate your sensor, see [Sign in and activate the sensor](how-to-activate-and-set-up-your-sensor.md#sign-in-and-activate-the-sensor).
-
-For information on how to activate your on-premises management console, see [Activate the on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md#activate-the-on-premises-management-console).
+For information on how to activate your on-premises management console, see [Activate and set up an on-premises management console](ot-deploy/activate-deploy-management.md).
## How to change the network configuration Change network configuration settings before or after you activate your sensor using either of the following options: -- **From the sensor UI**: [Update the sensor network configuration](how-to-manage-individual-sensors.md#update-the-sensor-network-configuration)
+- **From the sensor UI**: [Update the OT sensor network configuration](how-to-manage-individual-sensors.md#update-the-ot-sensor-network-configuration)
- **From the sensor CLI**: [Network configuration](cli-ot-sensor.md#network-configuration) For more information, see [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md), [Getting started with advanced CLI commands](references-work-with-defender-for-iot-cli-commands.md), and [CLI command reference from OT network sensors](cli-ot-sensor.md).
For more information, see [Activate and set up your sensor](how-to-activate-and-
After installing the software for your sensor or on-premises management console, you'll want to perform the [Post-installation validation](ot-deploy/post-install-validation-ot-software.md).
-You can also use our [UI and CLI tools](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#check-system-health) to check system health and review your overall system statistics.
+You can also use our [UI and CLI tools](how-to-troubleshoot-sensor.md#check-system-health) to check system health and review your overall system statistics.
-For more information, see [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md).
+For more information, see [Troubleshoot the sensor](how-to-troubleshoot-sensor.md) and [Troubleshoot the on-premises management console](how-to-troubleshoot-on-premises-management-console.md).
## Next steps
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
Last updated 12/25/2022
-# Quickstart: Get started with OT network security monitoring
+# Add an OT plan to your Azure subscription
-This quickstart describes how to set up a trial plan for OT security monitoring with Microsoft Defender for IoT.
+This article describes how to set up a trial plan for OT security monitoring with Microsoft Defender for IoT.
-A trial plan for OT monitoring provides 30-day support for 1000 devices. Use this trial with a [virtual sensor](tutorial-onboarding.md) or on-premises sensors to monitor traffic, analyze data, generate alerts, understand network risks and vulnerabilities, and more.
+A trial plan for OT monitoring provides 30-day support for 1000 devices. You might want to use this trial with a [virtual sensor](tutorial-onboarding.md) or on-premises sensors to monitor traffic, analyze data, generate alerts, understand network risks and vulnerabilities, and more.
## Prerequisites
Before you start, make sure that you have:
- Access to the Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner). For more information, see [Azure user roles for OT and Enterprise IoT monitoring with Defender for IoT](roles-azure.md).
-## Identify and plan your OT solution architecture
+- A plan for your Defender for IoT deployment, such as any system requirements, [traffic mirroring](best-practices/traffic-mirroring-methods.md), any [SSL/TLS certificates](ot-deploy/create-ssl-certificates.md), and so on. For more information, see [Plan your OT monitoring system](best-practices/plan-corporate-monitoring.md).
-We recommend that you identify system requirements and plan your OT network monitoring architecture before you start, even if you're starting with a trial subscription.
+ If you want to use on-premises sensors, make sure that you have the [hardware appliances](ot-appliance-sizing.md) for those sensors and any administrative user permissions.
-- Make sure that you have network switches that support [traffic monitoring](best-practices/traffic-mirroring-methods.md) via a SPAN port and TAPs (Test Access Points).--- Research your own network architecture and decide which and how much data you'll want to monitor. Check any requirements for creating certificates and other details, and [understand where on your network](best-practices/understand-network-architecture.md) you'll want to place your OT network sensors.--- If you want to use on-premises sensors, make sure that you have the [hardware appliances](ot-appliance-sizing.md) for those sensors and any administrative user permissions.-
-For more information, see the [OT monitoring predeployment checklist](pre-deployment-checklist.md).
-
-## Add a trial Defender for IoT plan for OT networks
+## Add a trial plan
This procedure describes how to add a trial Defender for IoT plan for OT networks to an Azure subscription.
Your new plan is listed under the relevant subscription on the **Plans and prici
## Next steps
-> [!div class="nextstepaction"]
-> [Onboard and activate a virtual OT sensor](tutorial-onboarding.md)
-
-> [!div class="nextstepaction"]
-> [Use a pre-configure physical appliance](ot-pre-configured-appliances.md)
-
-> [!div class="nextstepaction"]
-> [Understand Defender for IoT subscription billing](billing.md)
-
-> [!div class="nextstepaction"]
-> [Defender for IoT pricing](https://azure.microsoft.com/pricing/details/iot-defender/)
+> [!div class="step-by-step"]
+> [Defender for IoT OT deployment path »](ot-deploy/ot-deploy-path.md)
defender-for-iot How To Activate And Set Up Your On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-on-premises-management-console.md
Title: Activate and set up your on-premises management console
-description: Activating the management console ensures that sensors are registered with Azure and send information to the on-premises management console, and that the on-premises management console carries out management tasks on connected sensors.
+description: Activating the management console ensures that sensors are registered with Azure and sending information to the on-premises management console, and that the on-premises management console carries out management tasks on connected sensors.
Last updated 06/06/2022
To sign in to the on-premises management console:
1. Enter the username and password you received for the on-premises management console during the system installation.
-If you forgot your password, select the **Recover Password** option. See [Password recovery](how-to-manage-the-on-premises-management-console.md#password-recovery) for instructions on how to recover your password.
-
+If you forgot your password, select the **Recover Password** option.
## Activate the on-premises management console
-After you sign in for the first time, you need to activate the on-premises management console by getting and uploading an activation file. Activation files on the on-premises management console enforces the number of committed devices configured for your subscription and Defender for IoT plan. For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md).
+After you sign in for the first time, you need to activate the on-premises management console by getting and uploading an activation file. Activation files on the on-premises management console enforce the number of committed devices configured for your subscription and Defender for IoT plan. For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md).
**To activate the on-premises management console**:
After activating an on-premises management console, you'll need to apply new act
|Location |Activation process | ||| |**On-premises management console** | Apply a new activation file on your on-premises management console if you've [modified the number of committed devices](how-to-manage-subscriptions.md#edit-a-plan-for-ot-networks) in your subscription. |
-|**Cloud-connected and locally-managed sensors** | Cloud-connected and locally-managed sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>If you're [updating an OT sensor from a legacy version](update-ot-software.md#update-legacy-ot-sensor-software), you'll need to re-activate your updated sensor. |
+|**Cloud-connected and locally managed sensors** | Cloud-connected and locally managed sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>If you're updating an OT sensor from a legacy version, you'll need to re-activate your updated sensor. |
For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md).
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md
Administrators who sign in for the first time should verify that they have acces
### First-time sign in and activation checklist
-Before signing in to the sensor console, administrator users should have access to:
+Before administrators sign in to the sensor console, they should have access to:
- The sensor IP address that was defined during the installation.
The console supports the following certificate types:
> [!IMPORTANT] > We recommend that you don't use the default self-signed certificate. The certificate is not secure and should be used for test environments only. The owner of the certificate can't be validated, and the security of your system can't be maintained. Never use this option for production networks.
-For more information about working with certificates, see [Manage certificates](how-to-manage-individual-sensors.md#manage-certificates).
- ### Sign in and activate the sensor **To sign in and activate:**
For information about uploading a new certificate, supported certificate paramet
### Activation expirations
-After activating a sensor, cloud-connected and locally-managed sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active.
+After you've activated your sensor, cloud-connected and locally managed sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active.
-If you're updating an OT sensor from a legacy version, you'll need to re-activate your updated sensor. For more information, see [Update legacy OT sensor software](update-ot-software.md#update-legacy-ot-sensor-software).
+If you're updating an OT sensor from a legacy version, you'll need to re-activate your updated sensor.
For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md) and [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md).
You can access console tools from the side menu. Tools help you:
| Tools| Description | | --|--| | Overview | View a dashboard with high-level information about your sensor deployment, alerts, traffic, and more. |
-| Device map | View the network devices, device connections, Purdue levels, and device properties in a map. Various zoom, highlight, and filter options are available to help you gain the insight you need. For more information, see [Investigate devices on a device map](how-to-work-with-the-sensor-device-map.md) |
+| Device map | View the network devices, device connections, Purdue levels, and device properties in a map. Various zoom, highlight, and filter options are available to help you gain the insight you need. For more information, see [Investigate devices on a device map](how-to-work-with-the-sensor-device-map.md) |
| Device inventory | The Device inventory displays a list of device attributes that this sensor detects. Options are available to: <br /> - Sort, or filter the information according to the table fields, and see the filtered information displayed. <br /> - Export information to a CSV file. <br /> - Import Windows registry details. For more information, see [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md).| | Alerts | Alerts are triggered when sensor engines detect changes or suspicious activity in network traffic that requires your attention. For more information, see [View and manage alerts on your OT sensor](how-to-view-alerts.md).|
defender-for-iot How To Control What Traffic Is Monitored https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-control-what-traffic-is-monitored.md
Title: Control what traffic is monitored
-description: Sensors automatically perform deep packet detection for IT and OT traffic and resolve information about network devices, such as device attributes and network behavior. Several tools are available to control the type of traffic that each sensor detects.
Previously updated : 06/02/2022
+ Title: Control the OT traffic monitored by Microsoft Defender for IoT
+description: Learn how to control the OT network traffic monitored by Microsoft Defender for IoT.
Last updated : 01/24/2023
-# Control what traffic is monitored
+# Control the OT traffic monitored by Microsoft Defender for IoT
-Sensors automatically perform deep packet detection for IT and OT traffic and resolve information about network devices, such as device attributes and behavior. Several tools are available to control the type of traffic that each sensor detects.
+This article is one in a series of articles describing the [deployment path](ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT.
-## Analytics and self-learning engines
-Engines identify security issues via continuous monitoring and five analytics engines that incorporate self-learning to eliminate the need for updating signatures or defining rules. The engines use ICS-specific behavioral analytics and data science to continuously analyze OT network traffic for anomalies. The five engines are:
+Microsoft Defender for IoT OT network sensors automatically run deep packet detection for IT and OT traffic, resolving network device data, such as device attributes and behavior.
-- **Protocol violation detection**: Identifies the use of packet structures and field values that violate ICS protocol specifications.
+After installing, activating, and configuring your OT network sensor, use the tools described in this article to control the type of traffic detected by each OT sensor, how it's identified, and what's included in Defender for IoT alerts.
-- **Policy violation detection**: Identifies policy violations such as unauthorized use of function codes, access to specific objects, or changes to device configuration.
+## Prerequisites
-- **Industrial malware detection**: Identifies behaviors that indicate the presence of known malware such as Conficker, Black Energy, Havex, WannaCry, and NotPetya.
+Before performing the procedures in this article, you must have:
-- **Anomaly detection**: Detects unusual machine-to-machine (M2M) communications and behaviors. By modeling ICS networks as deterministic sequences of states and transitions, the engine uses a patented technique called Industrial Finite State Modeling (IFSM). The solution requires a shorter learning period than generic mathematical approaches or analytics, which were originally developed for IT rather than OT. It also detects anomalies faster, with minimal false positives.
+- An OT network sensor [installed](ot-deploy/install-software-ot-sensor.md), [activated, and configured](ot-deploy/activate-deploy-sensor.md)
-- **Operational incident detection**: Identifies operational issues such as intermittent connectivity that can indicate early signs of equipment failure.
+- Access to your OT network sensor and on-premises management console as an **Admin** user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
-## Learning and Smart IT Learning modes
+This step is performed by your deployment teams.
-The Learning mode instructs your sensor to learn your network's usual activity. Examples are devices discovered in your network, protocols detected in the network, file transfers between specific devices, and more. This activity becomes your network baseline.
+## Define OT and IoT subnets
-The Learning mode is automatically enabled after installation and will remain enabled until turned off. The approximate learning mode period is between two to six weeks, depending on the network size and complexity. After this period, when the learning mode is disabled, any new activity detected will trigger alerts. Alerts are triggered when the policy engine discovers deviations from your learned baseline.
+Subnet configurations affect how devices are displayed in the sensor's [device maps](how-to-work-with-the-sensor-device-map.md). In the device maps, IT devices are automatically aggregated by subnet, where you can expand and collapse each subnet view to drill down as needed.
-After the learning period is complete and the Learning mode is disabled, the sensor might detect an unusually high level of baseline changes that are the result of normal IT activity, such as DNS and HTTP requests. The activity is called nondeterministic IT behavior. The behavior might also trigger unnecessary policy violation alerts and system notifications. To reduce the amount of these alerts and notifications, you can enable the **Smart IT Learning** function.
+While the OT network sensor automatically learns the subnets in your network, we recommend confirming the learned settings and updating them as needed to optimize your map views.
-When Smart IT Learning is enabled, the sensor tracks network traffic that generates nondeterministic IT behavior based on specific alert scenarios.
+Any subnets not listed are treated as external networks.
-The sensor monitors this traffic for seven days. If it detects the same nondeterministic IT traffic within the seven days, it continues to monitor the traffic for another seven days. When the traffic isn't detected for a full seven days, Smart IT Learning is disabled for that scenario. New traffic detected for that scenario will only then generate alerts and notifications.
-
-Working with Smart IT Learning helps you reduce the number of unnecessary alerts and notifications caused by noisy IT scenarios.
-
-If your sensor is controlled by the on-premises management console, you can't disable the learning modes. In cases like this, the learning mode can only be disabled from the management console.
-
-The learning capabilities (Learning and Smart IT Learning) are enabled by default.
-
-**To enable or disable learning:**
-
-1. Select **System settings** > **Network monitoring** > **Detection Engines and Network Modeling**.
-1. Enable or disable the **Learning** and **Smart IT Learning** options.
--
-## Configure subnets
-
-Subnet configurations affect how you see devices in the device map.
-
-By default, the sensor discovers your subnet setup and populates the **Subnet Configuration** dialog box with this information.
-
-To enable focus on the OT devices, IT devices are automatically aggregated by subnet in the device map. Each subnet is presented as a single entity on the map, including an interactive collapsing/expanding capability to "drill down" into an IT subnet and back.
-
-When you're working with subnets, select the ICS subnets to identify the OT subnets. You can then focus the map view on OT and ICS networks and collapse to a minimum the presentation of IT network elements. This effort reduces the total number of the devices shown on the map and provides a clear picture of the OT and ICS network elements.
--
-You can change the configuration or change the subnet information manually by exporting the discovered data, changing it manually, and then importing back the list of subnets that you manually defined. For more information about export and import, see [Import device information](how-to-import-device-information.md).
-
-In some cases, such as environments that use public ranges as internal ranges, you can instruct the sensor to resolve all subnets as internal subnets by selecting the **Do Not Detect Internet Activity** option. When you select that option:
--- Public IP addresses will be treated as local addresses.--- No alerts will be sent about unauthorized internet activity, which reduces notifications and alerts received on external addresses.-
-**To configure subnets:**
-
-1. On the side menu, select **System Settings**.
-
-1. Select **Basic**, and then select **Subnets**.
-
-3. To add subnets automatically when new devices are discovered, keep the **Auto Subnets Learning** checkbox selected.
-
-4. To resolve all subnets as internal subnets, select **Resolve all internet traffic as internal/private**. Public IPs will be treated as private local addresses. No alerts will be sent about unauthorized internet activity.
-
-5. Select **Add subnet** and define the following parameters for each subnet:
+> [!NOTE]
+> For cloud-connected sensors, you may eventually start configuring OT sensor settings from the Azure portal. Once you start configuring settings from the Azure portal, the **Subnets** pane on the OT sensor is read-only. For more information, see [Configure OT sensor settings from the Azure portal](configure-sensor-settings-portal.md).
- - The subnet IP address.
- - The subnet mask address.
- - The subnet name. We recommend that you name each subnet with a meaningful name that you can easily identify, so you can differentiate between IT and OT networks. The name can be up to 60 characters.
+**To define subnets**:
-6. To mark this subnet as an OT subnet, select **ICS Subnet**.
+1. Sign into your OT sensor as an **Admin** user and select **System settings > Basic > Subnets**.
-7. To present the subnet separately when you're arranging the map according to the Purdue level, select **Segregated**.
+1. Confirm the current subnets listed and modify settings as needed.
-9. To delete all subnets, select **Clear All**.
+ We recommend giving each subnet a meaningful name to differentiate between IT and OT networks. Subnet names can have up to 60 characters.
-10. To export configured subnets, select **Export**. The subnet table is downloaded to your workstation.
+1. Use any of the following options to help you optimize your subnet settings:
-11. Select **Save**.
+ |Name |Description |
+ |||
+ |**Import subnets** | Import a .CSV file of subnet definitions |
+ |**Export subnets** | Export the currently listed subnets to a .CSV file. |
+ |**Clear all** | Clear all currently defined subnets |
+ |**Auto subnet learning** | Selected by default. Clear this option to define your subnets manually instead of having them be automatically detected by your OT sensor as new devices are detected. |
+ |**Resolve all Internet traffic as internal/private** | Select to consider all public IP addresses as private, local addresses. If selected, public IP addresses are treated as local addresses, and alerts aren't sent about unauthorized internet activity. <br><br>This option reduces notifications and alerts received about external addresses. |
+ |**ICS Subnet** | Select to define a specific subnet as a separate OT subnet. Selecting this option helps you collapse device maps to a minimum of IT network elements. |
+ |**Segregated** | Select to show this subnet separately when displaying the device map according to Purdue level. |
-### Importing information
+1. When you're done, select **Save** to save your updates.
-To import subnet information, select **Import** and select a CSV file to import. The subnet information is updated with the information that you imported. If you import an empty field, you'll lose your data.
+## Customize port and VLAN names
-## Detection engines
+Use the following procedures to enrich the device data shown in Defender for IoT by customizing port and VLAN names on your OT network sensors.
-Self-learning analytics engines eliminate the need for updating signatures or defining rules. The engines use ICS-specific behavioral analytics and data science to continuously analyze OT network traffic for anomalies, malware, operational problems, protocol violations, and baseline network activity deviations.
+For example, you might want to assign a name to a non-reserved port that shows unusually high activity in order to call it out, or assign a name to a VLAN number to identify it more quickly.
> [!NOTE]
-> We recommend that you enable all the security engines.
-
-When an engine detects a deviation, an alert is triggered. You can view and manage alerts from the alert screen or from a partner system.
-
-The name of the engine that triggered the alert appears under the alert title.
+> For cloud-connected sensors, you may eventually start configuring OT sensor settings from the Azure portal. Once you start configuring settings from the Azure portal, the **VLANs** and **Port naming** panes on the OT sensors are read-only. For more information, see [Configure OT sensor settings from the Azure portal](configure-sensor-settings-portal.md).
-### Protocol violation engine
+### Customize names of detected ports
-A protocol violation occurs when the packet structure or field values don't comply with the protocol specification.
+Defender for IoT automatically assigns names to most universally reserved ports, such as DHCP or HTTP. However, you might want to customize the name of a specific port to highlight it, such as when you're watching a port with unusually high detected activity.
-Example scenario:
-*"Illegal MODBUS Operation (Function Code Zero)"* alert. This alert indicates that a primary device sent a request with function code 0 to a secondary device. This action isn't allowed according to the protocol specification, and the secondary device might not handle the input correctly.
+Port names are shown in Defender for IoT when viewing device groups from the OT sensor's [device map](how-to-work-with-the-sensor-device-map.md), or when you create OT sensor reports that include port information.
-### Policy violation engine
+**To customize a port name:**
-A policy violation occurs with a deviation from baseline behavior defined in learned or configured settings.
+1. Sign into your OT sensor as an **Admin** user.
-Example scenario:
-*"Unauthorized HTTP User Agent"* alert. This alert indicates that an application that wasn't learned or approved by policy is used as an HTTP client on a device. This might be a new web browser or application on that device.
+1. Select **System settings** and then, under **Network monitoring**, select **Port Naming**.
-### Malware engine
+1. In the **Port naming** pane that appears, enter the port number you want to name, the port's protocol, and a meaningful name. Supported protocol values include: **TCP**, **UDP**, and **BOTH**.
-The Malware engine detects malicious network activity.
+1. Select **+ Add port** to customize another port, and **Save** when you're done.
-Example scenario:
-*"Suspicion of Malicious Activity (Stuxnet)"* alert. This alert indicates that the sensor detected suspicious network activity known to be related to the Stuxnet malware. This malware is an advanced persistent threat aimed at industrial control and SCADA networks.
+### Customize a VLAN name
-### Anomaly engine
+VLANs are either discovered automatically by the OT network sensor or added manually. Automatically discovered VLANs can't be edited or deleted, but manually added VLANs require a unique name. If a VLAN isn't explicitly named, the VLAN's number is shown instead.
-The Anomaly engine detects anomalies in network behavior.
+VLAN support is based on IEEE 802.1Q (up to VLAN ID 4094).
-Example scenario:
-*"Periodic Behavior in Communication Channel"* alert. The component inspects network connections and finds periodic and cyclic behavior of data transmission. This behavior is common in industrial networks.
+> [!NOTE]
+> VLAN names aren't synchronized between the OT network sensor and the on-premises management console. If you want to view customized VLAN names on the on-premises management console, [define the VLAN names](../how-to-manage-the-on-premises-management-console.md#define-vlan-names) there as well.
-### Operational engine
+**To configure VLAN names on an OT network sensor:**
-The Operational engine detects operational incidents or malfunctioning entities.
+1. Sign in to your OT sensor as an **Admin** user.
-Example scenario:
-*"Device is Suspected to be Disconnected (Unresponsive)"* alert. This alert is raised when a device isn't responding to any kind of request for a predefined period. This alert might indicate a device shutdown, disconnection, or malfunction.
+1. Select **System Settings** and then, under **Network monitoring**, select **VLAN Naming**.
-### Enable and disable engines
+1. In the **VLAN naming** pane that appears, enter a VLAN ID and unique VLAN name. VLAN names can contain up to 50 ASCII characters.
-When you disable a policy engine, information that the engine generates won't be available to the sensor. For example, if you disable the Anomaly engine, you won't receive alerts on network anomalies. If you created a forwarding rule, anomalies that the engine learns won't be sent. To enable or disable a policy engine, select **Enabled** or **Disabled** for the specific engine.
+1. Select **+ Add VLAN** to customize another VLAN, and **Save** when you're done.
+1. **For Cisco switches**: Add the `monitor session 1 destination interface XX/XX encapsulation dot1q` command to the SPAN port configuration, where *XX/XX* is the name and number of the port.
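For reference, on a Cisco IOS switch the resulting SPAN session might look like the following sketch. The session number matches the command above, but the interface names are placeholders that you'd replace with your own source and destination ports.

```config
! Example only: mirror traffic to the port connected to the OT sensor,
! preserving VLAN tags with 802.1Q encapsulation.
! GigabitEthernet1/0/1 and GigabitEthernet1/0/48 are placeholder interfaces.
monitor session 1 source interface GigabitEthernet1/0/1
monitor session 1 destination interface GigabitEthernet1/0/48 encapsulation dot1q
```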
## Configure DHCP address ranges
-Your network might consist of both static and dynamic IP addresses. Typically, static addresses are found on OT networks through historians, controllers, and network infrastructure devices such as switches and routers. Dynamic IP allocation is typically implemented on guest networks with laptops, PCs, smartphones, and other portable equipment (using Wi-Fi or LAN physical connections in different locations).
-
-If you're working with dynamic networks, you handle IP address changes that occur when new IP addresses are assigned. You do this by defining DHCP address ranges.
-
-Changes might happen, for example, when a DHCP server assigns IP addresses.
-
-Defining dynamic IP addresses on each sensor enables comprehensive, transparent support in instances of IP address changes. This activity ensures comprehensive reporting for each unique device.
+Your OT network might consist of both static and dynamic IP addresses.
-The sensor console presents the most current IP address associated with the device and indicates which devices are dynamic. For example:
+- **Static addresses** are typically found on OT networks through historians, controllers, and network infrastructure devices such as switches and routers.
+- **Dynamic IP allocation** is typically implemented on guest networks with laptops, PCs, smartphones, and other portable equipment, using Wi-Fi or LAN physical connections in different locations.
-- The Data Mining report and Device Inventory report consolidate all activity learned from the device as one entity, regardless of the IP address changes. These reports indicate which addresses were defined as DHCP addresses.
+If you're working with dynamic networks, you'll need to handle IP address changes as they occur, by defining DHCP address ranges on each OT network sensor. When an IP address is defined as a DHCP address, Defender for IoT identifies any activity happening on the same device, regardless of IP address changes.
- :::image type="content" source="media/how-to-control-what-traffic-is-monitored/populated-device-inventory-screen-v2.png" alt-text="Screenshot that shows device inventory." lightbox="media/how-to-control-what-traffic-is-monitored/populated-device-inventory-screen-v2.png":::
+**To define DHCP address ranges**:
-- The **Device Properties** window indicates if the device was defined as a DHCP device.
+1. Sign into your OT sensor and select **System settings** > **Network monitoring** > **DHCP Ranges**.
-**To set a DHCP address range:**
+1. Do one of the following:
-1. On the side menu, select **System Settings** > **Network monitoring** > **DHCP Ranges**.
+ - To add a single range, select **+ Add range** and enter the IP address range and an optional name for your range.
+ - To add multiple ranges, create a .CSV file with columns for the *From*, *To*, and *Name* data for each of your ranges (a sample file is shown after this procedure). Select **Import** to import the file to your OT sensor. Range values imported from a .CSV file overwrite any range data currently configured for your sensor.
+ - To export currently configured ranges to a .CSV file, select **Export**.
+ - To clear all currently configured ranges, select **Clear all**.
+
+ Range names can have up to 256 characters.
-2. Define a new range by setting **From** and **To** values.
+1. Select **Save** to save your changes.
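For reference, a multi-range import file with the *From*, *To*, and *Name* columns described above might look like the following sketch. The header spelling and values are illustrative only, so check a file exported from your own sensor for the exact format it expects.

```csv
From,To,Name
10.10.20.1,10.10.20.254,Guest Wi-Fi laptops
172.16.5.1,172.16.5.100,Engineering workstations
```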
-3. Optionally: Define the range name, up to 256 characters.
+## Configure traffic filters (advanced)
-4. To export the ranges to a CSV file, select **Export**.
+To reduce alert fatigue and focus your network monitoring on high priority traffic, you may decide to filter the traffic that streams into Defender for IoT at the source. Capture filters are configured via the OT sensor CLI, and allow you to block high-bandwidth traffic at the hardware layer, optimizing both appliance performance and resource usage.
-5. To manually add multiple ranges from a CSV file, select **Import** and then select the file.
-
- > [!NOTE]
- > The ranges that you import from a CSV file overwrite the existing range settings.
-
-6. Select **Save**.
+For more information, see:
+- [Defender for IoT CLI users and access](references-work-with-defender-for-iot-cli-commands.md)
+- [Traffic capture filters](cli-ot-sensor.md#traffic-capture-filters)
## Next steps
-For more information, see:
+> [!div class="step-by-step"]
+> [« Configure proxy settings on an OT sensor](connect-sensors.md)
-- [Configure active monitoring for OT networks](configure-active-monitoring.md)
-- [Investigate sensor detections in a device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md)
-- [Investigate sensor detections in the device map](how-to-work-with-the-sensor-device-map.md)
+> [!div class="step-by-step"]
+> [Verify and update your detected device inventory »](ot-deploy/update-device-inventory.md)
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
This article describes how to configure your OT sensor or on-premises management
- You'll also need to define SMTP settings on the OT sensor or on-premises management console.
- For more information, see [Configure SMTP settings on an OT sensor](how-to-manage-individual-sensors.md#configure-smtp-settings) and [Configure SMTP settings on an on-premises management console](how-to-manage-the-on-premises-management-console.md#mail-server-settings).
+ For more information, see [Configure SMTP mail server settings on an OT sensor](how-to-manage-individual-sensors.md#configure-smtp-mail-server-settings) and [Configure SMTP mail server settings on the on-premises management console](how-to-manage-the-on-premises-management-console.md#configure-smtp-mail-server-settings).
## Create forwarding rules on an OT sensor
defender-for-iot How To Import Device Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-import-device-information.md
Title: Import device information
-description: Defender for IoT sensors monitor and analyze mirrored traffic. In these cases, you might want to import data to enrich information on devices already detected.
Previously updated : 02/01/2022
+ Title: Import extra data for detected OT devices - Microsoft Defender for IoT
+description: Learn how to manually enhance the device data automatically detected by your Microsoft Defender for IoT OT sensor with extra, imported data.
Last updated : 01/24/2023
-# Import device information to a sensor
+# Import extra data for detected OT devices
-Sensors monitor and analyze device traffic. In some cases, because of network policies, some information might not be transmitted. In this case, you can import data and add it to device information that's already detected. You have two options for import:
+OT network sensors automatically monitor and analyze detected device traffic. In some cases, your organization's network policies may prevent some device data from being ingested into Microsoft Defender for IoT.
-- **Import from the device map**: Import device names, type, group, or Purdue layer to the device map.
-- **Import from import settings**: Import device IP address, operating system, patch level, or authorization status to the device map.
+This article describes how you can manually import the missing data to your OT sensor and add it to the device data already detected.
-## Import from the device map
+## Prerequisites
-Before you start, note that:
+Before performing the procedures in this article, you must have:
-- **Names**: Names can be up to 30 characters.
-- **Device Group**: Create a new group of up to 30 characters.
-- **Type** or **Purdue Layer**: Use the options that appear in the device properties when you select a device.
-- To avoid conflicts, don't import the data that you exported from one sensor to another.
+- An OT network sensor [installed](ot-deploy/install-software-ot-sensor.md), [activated, and configured](ot-deploy/activate-deploy-sensor.md), with device data ingested.
-Import data as follows:
+- Access to your OT network sensor as an **Admin** user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
-1. In Defender for IoT, select **Device map**.
-2. Select **Export Devices**. An extensive range of information appears in the exported file. This information includes protocols that the device uses and the device authorization status.
-4. In the CSV file, you should change only the device name, type, group, and Purdue layer. Use capitalization standards shown in the exported file. For example, for the Purdue layer, use all first-letter capitalization.
-1. Save the file.
-1. Select **Import Devices**. Then select the CSV file that you want to import.
+- An understanding of the extra device data you want to import. Use that understanding to choose one of the following import methods:
-## Import from import settings
+ - **Import data from the device map** to import device names, types, groups, or Purdue layer
+ - **Import data from system settings** to import device IP addresses, operating systems, patch levels, or authorization statuses
-1. Download the [Devices settings file](https://download.microsoft.com/download/8/2/3/823c55c4-7659-4236-bfda-cc2427be2cee/CSS/devices_info_2.2.8%20and%20up.xlsx).
-1. In the **Devices** sheet, enter the device IP address.
-1. In **Device Type**, select the type from the dropdown list.
-1. In **Last Update**, specify the data in YYYY-MM-DD format.
-1. In **System settings**, under **Import settings**, select **Device Information** to import. Select **Add** and upload the CSV file that you prepared.
+> [!TIP]
+> A device's authorization status affects the alerts that are triggered by the OT sensor for the selected device. You'll receive alerts for any devices *not* listed as authorized devices, as they're considered unauthorized.
+## Import data from the OT sensor device map
-## Import authorization status
+**To import device names, types, groups, or Purdue layers**:
-1. Download the [Authorization file](https://download.microsoft.com/download/8/2/3/823c55c4-7659-4236-bfda-cc2427be2cee/CSS/authorized_devices%20-%20example.csv) and save as a CSV file.
-1. In the authorized_devices sheet, specify the device IP address.
-1. Specify the authorized device name. Make sure that names are accurate. Names given to the devices in the imported list overwrite names shown in the device map.
-1. In **System settings**, under **Import settings**, select **Authorized devices** to import. Select **Add** and upload the CSV file that you prepared.
+1. Sign into your OT sensor and select **Device map** > **Export Devices** to export the device data already detected by your OT sensor.
-When the information is imported, you receive alerts about unauthorized devices for all the devices that don't appear on this list.
+1. Open the downloaded .CSV file for editing and modify *only* the following data, as needed (a sample of edited rows is shown after this procedure):
+ - **Name**. Maximum length: 30 characters
+ - **Type**. Access the Defender for IoT [device settings file](https://download.microsoft.com/download/8/2/3/823c55c4-7659-4236-bfda-cc2427be2cee/CSS/devices_info_2.2.8%20and%20up.xlsx) and use one of the options listed in the **Devices type** tab
+ - **Group**. Maximum length: 30 characters
+ - **Purdue layer**. Enter one of the following: **Enterprise**, **Supervisory**, or **Process Control**
-## Next steps
+ Make sure to use capitalization standards already in use in the downloaded file. For example, in the **Purdue Layer** column, use *Title Caps*.
+
+ > [!IMPORTANT]
+ > Make sure that you don't import data to your OT sensor that you've exported from a different sensor.
+
+1. When you're done, save your file to a location accessible from your OT sensor.
+
+1. On your OT sensor, in the **Device map** page, select **Import Devices** and select your modified .CSV file.
+
+Your device data is updated.
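For illustration only, edited rows might look like the following sketch. The exported file contains many more columns that you should leave unchanged, and the names, groups, and types shown here are hypothetical examples.

```csv
Name,Type,Group,Purdue layer
PLC-01,PLC,Packaging line,Process Control
ENG-STATION-02,Engineering Station,Packaging line,Supervisory
```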
+
+## Import data from the OT sensor system settings
+
+**To import device IP addresses, operating systems, or patch levels**:
+
+1. Download the Defender for IoT [device settings file](https://download.microsoft.com/download/8/2/3/823c55c4-7659-4236-bfda-cc2427be2cee/CSS/devices_info_2.2.8%20and%20up.xlsx) and open it for editing.
+
+1. In the downloaded file, enter the following details for each device (see the sample rows after this procedure):
+
+ - **IP Address**. Enter the device's IP address.
+ - **Device Type**. Enter one of the device types listed on the **Devices type** sheet.
+ - **Last Update**. Enter the date that the device was last updated, in `YYYY-MM-DD` format.
+
+1. Sign into your OT sensor and select **System settings > Import settings > Device information**.
-For more information, see:
+1. In the **Device information** pane, select **+ Import file** and then select your edited .CSV file.
-- [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md)
+1. Select **Close** to save your changes.
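For illustration only, rows in the edited file might look like the following sketch. The device types shown are hypothetical placeholders, so use only values that actually appear on the **Devices type** sheet of the downloaded file.

```csv
IP Address,Device Type,Last Update
10.1.10.15,PLC,2023-01-15
10.1.10.27,Historian,2023-02-20
```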
+
+**To import device authorization status**:
+
+> [!IMPORTANT]
+> After importing device authorization status, any devices *not* included in the import list are redefined as unauthorized, and you'll start to receive new alerts about any traffic on each of those devices.
+
+1. Download the Defender for IoT [device authorization file](https://download.microsoft.com/download/8/2/3/823c55c4-7659-4236-bfda-cc2427be2cee/CSS/authorized_devices%20-%20example.csv) and open it for editing.
+
+1. In the downloaded file, list the IP addresses and names of all devices you want to mark as authorized (see the sample rows after this procedure).
+
+ Make sure that your names are accurate. Names imported from a .CSV file overwrite any names already shown in the OT sensor's device map.
+
+1. Sign into your OT sensor and select **System settings > Import settings > Authorized devices**.
+
+1. In the **Authorized devices** pane, select **+ Import File** and then select your edited .CSV file.
+
+1. Select **Close** to save your changes.
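For illustration only, an edited authorization file might look like the following sketch. The column layout should follow the downloaded example file, and the addresses and names here are hypothetical.

```csv
IP Address,Name
10.1.10.15,PLC-01
10.1.10.27,HISTORIAN-02
```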
+
+## Next steps
-- [Investigate sensor detections in a device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md)
+For more information, see [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md) and [Manage your OT device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md).
defender-for-iot How To Investigate All Enterprise Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md
Title: Manage your OT device inventory from an on-premises management console
-description: Learn how to view and manage OT devices (assets) from the Device inventory page on an on-premises management console.
+description: Learn how to view and manage OT devices (assets) from the Device Inventory page on an on-premises management console.
Previously updated : 07/12/2022
Last updated : 01/26/2023
For more information, see [What is a Defender for IoT committed device?](archite
> Alternatively, view your device inventory from [the Azure portal](how-to-manage-device-inventory-for-organizations.md), or from an [OT sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md).
>
+## Prerequisites
+
+Before performing the procedures in this article, make sure that you have:
+
+- An on-premises management console [installed](ot-deploy/install-software-on-premises-management-console.md), [activated, and configured](ot-deploy/activate-deploy-management.md). To view devices by zone, make sure that you've [configured sites and zones](ot-deploy/sites-and-zones-on-premises.md) on the on-premises management console.
+
+- One or more OT sensors [installed](ot-deploy/install-software-ot-sensor.md), [activated, configured](ot-deploy/activate-deploy-sensor.md), and [connected to your on-premises management console](ot-deploy/connect-sensors-to-management.md). To view devices per zone, make sure that each sensor is assigned to a specific zone.
+
+- Access to the on-premises management console with one of the following [user roles](roles-on-premises.md):
+
+ - **To view devices on the on-premises management console**, sign in as an *Admin*, *Security Analyst*, or *Viewer* user.
+
+ - **To export or import data**, sign in as an *Admin* or *Security Analyst* user.
+
+ - **To use the CLI**, access the CLI via SSH/Telnet as a [privileged user](references-work-with-defender-for-iot-cli-commands.md).
+
## View the device inventory

To view detected devices in the **Device Inventory** page in an on-premises management console, sign in to your on-premises management console, and then select **Device Inventory**.
Use any of the following options to modify or filter the devices shown:
For more information, see [Device inventory column data](device-inventory.md#device-inventory-column-data).
+### View device inventory by zone
+
+To view devices detected by connected OT sensors in a specific zone, use the **Site Management** page on an on-premises management console.
+
+1. Sign into your on-premises management console and select **Site Management**.
+
+1. Locate the site and zone you want to view, using the filtering options at the top as needed:
+
+ - **Connectivity**: Select to view all OT sensors, or only connected or disconnected sensors.
+ - **Upgrade Status**: Select to view all OT sensors, or only those with a specific [software update status](update-ot-software.md#update-an-on-premises-management-console).
+ - **Business Unit**: Select to view all OT sensors, or only those from a [specific business unit](best-practices/plan-corporate-monitoring.md#plan-ot-sites-and-zones).
+ - **Region**: Select to view all OT sensors, or only those from a [specific region](best-practices/plan-corporate-monitoring.md#plan-ot-sites-and-zones).
+
+1. Select **View device inventory** for a specific OT sensor to jump to the device inventory for that OT sensor.
+
## Export the device inventory to CSV

Export your device inventory to a CSV file to manage or share data outside of the OT sensor.
defender-for-iot How To Manage Device Inventory For Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-device-inventory-for-organizations.md
Use any of the following options to modify or filter the devices shown:
For more information, see [Device inventory column data](device-inventory.md#device-inventory-column-data).
+
+> [!NOTE]
+> If your OT sensors detect multiple devices in the same zone with the same IP or MAC address, those devices are automatically merged and identified as a single, unique device. Devices that have different IP addresses, but the same MAC address, are not merged, and continue to be listed as unique devices.
+>
+> Merged devices are listed only once in the **Device inventory** page. For more information, see [Separating zones for recurring IP ranges](best-practices/plan-corporate-monitoring.md#separating-zones-for-recurring-ip-ranges).
### View full device details

To view full details about a specific device, select the device row. Initial details are shown in a pane on the right, where you can also select **View full details** to open the device details page and drill down more.
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
Title: Manage OT sensors from the sensor console - Microsoft Defender for IoT
-description: Learn how to manage individual Microsoft Defender for IoT OT network sensors directly from the sensor's console.
Previously updated : 11/28/2022
+ Title: Maintain Defender for IoT OT network sensors from the GUI - Microsoft Defender for IoT
+description: Learn how to perform maintenance activities on individual OT network sensors using the OT sensor console.
Last updated : 03/09/2023
-# Manage individual sensors
+# Maintain OT network sensors from the sensor console
-This article describes how to manage individual sensors, such as managing activation files, certificates, backups, and more.
+This article describes extra OT sensor maintenance activities that you might perform outside of a larger deployment process.
-You can also perform some management tasks for multiple sensors simultaneously from the Azure portal or an on-premises management console. For more information, see [Next steps](#next-steps).
+OT sensors can also be maintained from the OT sensor [CLI](cli-ot-sensor.md), the [Azure portal](how-to-manage-sensors-on-the-cloud.md), and an [on-premises management console](how-to-manage-sensors-from-the-on-premises-management-console.md).
[!INCLUDE [caution do not use manual configurations](includes/caution-manual-configurations.md)]
-## View overall sensor status
+## Prerequisites
-When you sign into your sensor, the first page shown is the **Overview** page.
+Before performing the procedures in this article, make sure that you have:
+
+- An OT network sensor [installed](ot-deploy/install-software-ot-sensor.md), [activated](ot-deploy/activate-deploy-sensor.md), and [onboarded](onboard-sensors.md) to Defender for IoT in the Azure portal.
+
+- Access to the OT sensor as an **Admin** user. Selected procedures and CLI access also require a privileged user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
+
+- To download software for OT sensors, you'll need access to the Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user.
+
+- An [SSL/TLS certificate prepared](ot-deploy/create-ssl-certificates.md) if you need to update your sensor's certificate.
+
+## View overall OT sensor status
+
+When you sign into your OT sensor, the first page shown is the **Overview** page.
For example:
Select the link in each widget to drill down for more information in your sensor
### Validate connectivity status
-Verify that your sensor is successfully connected to the Azure portal directly from the sensor's **Overview** page.
+Verify that your OT sensor is successfully connected to the Azure portal directly from the OT sensor's **Overview** page.
If there are any connection issues, a disconnection message is shown in the **General Settings** area on the **Overview** page, and a **Service connection error** warning appears at the top of the page in the :::image type="icon" source="media/how-to-manage-individual-sensors/bell-icon.png" border="false"::: **System Messages** area. For example:

:::image type="content" source="media/how-to-manage-individual-sensors/connectivity-status.png" alt-text="Screenshot of a sensor page showing the connectivity status as disconnected." lightbox="media/how-to-manage-individual-sensors/connectivity-status.png":::
-1. Find more information about the issue by hovering over the :::image type="icon" source="media/how-to-manage-individual-sensors/information-icon.png" border="false"::: information icon. For example:
+Find more information about the issue by hovering over the :::image type="icon" source="media/how-to-manage-individual-sensors/information-icon.png" border="false"::: information icon. For example:
- :::image type="content" source="media/how-to-manage-individual-sensors/connectivity-message.png" alt-text="Screenshot of a connectivity error message." lightbox="media/how-to-manage-individual-sensors/connectivity-message.png":::
-1. Take action by selecting the **Learn more** option under :::image type="icon" source="media/how-to-manage-individual-sensors/bell-icon.png" border="false"::: **System Messages**. For example:
+Take action by selecting the **Learn more** option under :::image type="icon" source="media/how-to-manage-individual-sensors/bell-icon.png" border="false"::: **System Messages**. For example:
- :::image type="content" source="media/how-to-manage-individual-sensors/system-messages.png" alt-text="Screenshot of the system messages pane." lightbox="media/how-to-manage-individual-sensors/system-messages.png":::
## Download software for OT sensors
For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
## Upload a new activation file
-Each OT sensor is onboarded as a cloud-connected or locally-managed OT sensor and activated using a unique activation file. For cloud-connected sensors, the activation file is used to ensure the connection between the sensor and Azure.
+Each OT sensor is onboarded as a cloud-connected or locally managed OT sensor and activated using a unique activation file. For cloud-connected sensors, the activation file is used to ensure the connection between the sensor and Azure.
-You'll need to upload a new activation file to your senor if you want to switch sensor management modes, such as moving from a locally-managed sensor to a cloud-connected sensor. Uploading a new activation file to your sensor includes deleting your sensor from the Azure portal and onboarding it again.
+You'll need to upload a new activation file to your sensor if you want to switch sensor management modes, such as moving from a locally managed sensor to a cloud-connected sensor, or if you're [updating from a legacy software version](update-legacy-ot-software.md#update-legacy-ot-sensor-software). Uploading a new activation file to your sensor includes deleting your sensor from the Azure portal and onboarding it again.
**To add a new activation file:**

1. In [Defender for IoT on the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started) > **Sites and sensors**, locate and [delete](how-to-manage-sensors-on-the-cloud.md#sensor-maintenance-and-troubleshooting) your OT sensor.
-1. Select **Onboard OT sensor > OT** to onboard the sensor again from scratch. For more information, see [Onboard OT sensors](onboard-sensors.md)
+1. Select **Onboard OT sensor > OT** to onboard the sensor again from scratch. For more information, see [Onboard OT sensors](onboard-sensors.md).
-1. On the **sites and sensors** page locate the sensor you just added.
+1. On the **Sites and sensors** page, locate the sensor you just added.
1. Select the three dots (...) on the sensor's row and select **Download activation file**. Save the file in a location accessible to your sensor.
You'll receive an error message if the activation file couldn't be uploaded. The
- **The sensor can't connect to the internet:** Check the sensor's network configuration. If your sensor needs to connect through a web proxy to access the internet, verify that your proxy server is configured correctly on the **Sensor Network Configuration** screen. Verify that the required endpoints are allowed in the firewall and/or proxy.
- For OT sensors version 22.x, download the list of required endpoints from the **Sites and sensors** page on the Azure portal. Select an OT sensor with a supported software version, or a site with one or more supported sensors. And then select **More actions** > **Download endpoint details**. For sensors with earlier versions, see [Sensor access to Azure portal](how-to-set-up-your-network.md#sensor-access-to-azure-portal).
-
-- **The activation file is valid but Defender for IoT rejected it:** If you can't resolve this problem, you can download another activation from the **Sites and Sensors** page in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). If this doesn't work, contact Microsoft Support.
-
-## Manage certificates
-
-Following sensor installation, a local self-signed certificate is generated and used to access the sensor web application. When logging in to the sensor for the first time, Administrator users are prompted to provide an SSL/TLS certificate.
-
-Sensor Administrators may be required to update certificates that were uploaded after initial login. This may happen, for example, if a certificate expired.
-
-**To update a certificate:**
-
-1. Select **System Settings** and then select **Basic**.
-
-1. Select **SSL/TLS Certificate.**
-
- :::image type="content" source="media/how-to-manage-individual-sensors/certificate-upload.png" alt-text="Upload a certificate":::
-
-1. In the SSL/TLS Certificates dialog box, delete the existing certificate and add a new one.
-
- - Add a certificate name.
- - Upload a CRT file and key file.
- - Upload a PEM file if necessary.
-
-If the upload fails, contact your security or IT administrator, or review the information in [Deploy SSL/TLS certificates on OT appliances](how-to-deploy-certificates.md).
-
-**To change the certificate validation setting:**
-
-1. Enable or disable the **Enable Certificate Validation** toggle. If the option is enabled and validation fails, communication between relevant components is halted, and a validation error is presented in the console. If disabled, certificate validation is not carried out. See [Verify CRL server access](how-to-deploy-certificates.md#verify-crl-server-access) for more information.
-
-1. Select **Save**.
-
-For more information about first-time certificate upload, see,
-[First-time sign-in and activation checklist](how-to-activate-and-set-up-your-sensor.md#first-time-sign-in-and-activation-checklist)
-
-## Connect a sensor to the management console
-
-This section describes how to ensure connection between the sensor and the on-premises management console. You need to do this if you're working in an air-gapped network and want to send device and alert information to the management console from the sensor. This connection also allows the management console to push system settings to the sensor and perform other management tasks on the sensor.
-
-**To connect:**
-
-1. Sign in to the on-premises management console.
-
-1. Select **System Settings**.
-
-1. In the **Sensor Setup ΓÇô Connection String** section, copy the automatically generated connection string.
-
- :::image type="content" source="media/how-to-manage-individual-sensors/connection-string-screen.png" alt-text="Screenshot of the Connection string screen.":::
-
-1. Sign in to the sensor console.
-
-1. On the left pane, select **System Settings**.
-
-1. Select **Management Console Connection**.
-
- :::image type="content" source="media/how-to-manage-individual-sensors/management-console-connection-screen.png" alt-text="Screenshot of the Management Console Connection dialog box.":::
-
-1. Paste the connection string in the **Connection string** box and select **Connect**.
-
-1. In the on-premises management console, in the **Site Management** window, assign the sensor to a site and zone.
-
-Continue with additional settings, such as [adding users](how-to-create-and-manage-users.md), [setting up an SMTP server](how-to-manage-individual-sensors.md#configure-smtp-settings), [forwarding alert rules](how-to-forward-alert-information-to-partners.md), and more. For more information, see [Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md).
-
-## Change the name of a sensor
-
-You can change the name of your sensor console. The new name will appear in:
+ For OT sensors version 22.x, download the list of required endpoints from the **Sites and sensors** page on the Azure portal. Select an OT sensor with a supported software version, or a site with one or more supported sensors. And then select **More actions** > **Download endpoint details**. For sensors with earlier versions, see [Sensor access to Azure portal](networking-requirements.md#sensor-access-to-azure-portal).
-- The sensor console web browser
-- Various console windows
-- Troubleshooting logs
-- The Sites and sensors page in the Defender for IoT portal on Azure.
+- **The activation file is valid but Defender for IoT rejected it:** If you can't resolve this problem, you can download another activation from the **Sites and sensors** page in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). If this doesn't work, contact Microsoft Support.
-The process for changing sensor names is the same for locally managed sensors and cloud-connected sensors.
+## Manage SSL/TLS certificates
-The sensor name is defined by the name assigned during the registration. The name is included in the activation file that you uploaded when signing in for the first time. To change the name of the sensor, you need to upload a new activation file.
+If you're working with a production environment, you'll have [deployed a CA-signed SSL/TLS certificate](ot-deploy/activate-deploy-sensor.md#deploy-an-ssltls-certificate) as part of your OT sensor deployment. We recommend using self-signed certificates only for testing purposes.
-**To change the name:**
+The following procedures describe how to deploy updated SSL/TLS certificates, such as if the certificate has expired.
-1. In the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started), go to the Sites and sensors page.
-
-1. Delete the sensor from the page.
-
-1. Register with the new name by selecting **Set up OT/ICS Security** from the Getting Started page.
-
-1. Download the new activation file.
-
-1. Sign in to the Defender for IoT sensor console.
-
-1. In the sensor console, select **System settings** > **Sensor management** and then select
-**Subscription & Activation Mode**.
-
-1. Select **Upload** and select the file you saved.
-
-1. Select **Activate**.
-
-## Update the sensor network configuration
-
-The sensor network configuration was defined during the sensor installation. You can change configuration parameters. You can also set up a proxy configuration.
+> [!TIP]
+> You can also [import the certificate to your OT sensor using CLI commands](references-work-with-defender-for-iot-cli-commands.md#tlsssl-certificate-commands).
+>
-If you create a new IP address, you might be required to sign in again.
+# [Deploy a CA-signed certificate](#tab/ca-signed)
-**To change the configuration:**
+**To deploy a CA-signed SSL/TLS certificate:**
-1. On the side menu, select **System Settings**.
+1. Sign into your OT sensor and select **System Settings** > **Basic** > **SSL/TLS Certificate**.
-1. In the **System Settings** window, select **Network**.
+1. In the **SSL/TLS Certificates** pane, select the **Import trusted CA certificate (recommended)** option. For example:
-1. Set the parameters:
+ :::image type="content" source="media/how-to-deploy-certificates/recommended-ssl.png" alt-text="Screenshot of importing a trusted CA certificate." lightbox="media/how-to-deploy-certificates/recommended-ssl.png":::
- | Parameter | Description |
- |--|--|
- | IP address | The sensor IP address |
- | Subnet mask | The mask address |
- | Default gateway | The default gateway address |
- | DNS | The DNS server address |
- | Hostname | The sensor hostname |
- | Proxy | Proxy host and port name |
+1. Enter the following parameters:
-1. Select **Save**.
+ | Parameter | Description |
+ |||
+ | **Certificate Name** | Enter your certificate name. |
+ | **Passphrase** - *Optional* | Enter a passphrase. |
+ | **Private Key (KEY file)** | Upload a Private Key (KEY file). |
+ | **Certificate (CRT file)** | Upload a Certificate (CRT file). |
+ | **Certificate Chain (PEM file)** - *Optional* | Upload a Certificate Chain (PEM file). |
-## Synchronize time zones on the sensor
+ Select **Use CRL (Certificate Revocation List) to check certificate status** to validate the certificate against a [CRL server](ot-deploy/create-ssl-certificates.md#verify-crl-server-access). The certificate is checked once during the import process.
-You can configure the sensor's time and region so that all the users see the same time and region.
+ If an upload fails, contact your security or IT administrator. For more information, see [SSL/TLS certificate requirements for on-premises resources](best-practices/certificate-requirements.md) and [Create SSL/TLS certificates for OT appliances](ot-deploy/create-ssl-certificates.md).
-| Parameter | Description |
-|--|--|
-| Timezone | The time zone definition for:<br />- Alerts<br />- Trends and statistics widgets<br />- Data mining reports<br /> -Risk assessment reports<br />- Attack vectors |
-| Date format | Select one of the following format options:<br />- dd/MM/yyyy HH:mm:ss<br />- MM/dd/yyyy HH:mm:ss<br />- yyyy/MM/dd HH:mm:ss |
-| Date and time | Displays the current date and local time in the format that you selected.<br />For example, if your actual location is America and New York, but the time zone is set to Europe and Berlin, the time is displayed according to Berlin local time. |
+1. In the **Validation for on-premises management console certificates** area, select **Required** if SSL/TLS certificate validation is required. Otherwise, select **None**.
-**To configure the sensor time:**
+ If you've selected **Required** and validation fails, communication between relevant components is halted, and a validation error is shown on the sensor. For more information, see [CRT file requirements](best-practices/certificate-requirements.md#crt-file-requirements).
-1. On the side menu, select **System settings** > **Basic**, > **Time & Region**.
+1. Select **Save** to save your certificate settings.
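If an upload fails, or you want to confirm in advance that the CRT and KEY files you selected belong to the same key pair, one optional check, assuming OpenSSL is available on your workstation, is to compare their public key digests. The file names below are placeholders.

```bash
# Example only: the two digests should be identical if the certificate and key match
openssl x509 -noout -pubkey -in sensor.crt | openssl sha256
openssl pkey -in sensor.key -pubout | openssl sha256
```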
-1. Set the parameters and select **Save**.
+# [Create and deploy a self-signed certificate](#tab/windows)
-## Set up backup and restore files
+Each OT sensor is installed with a self-signed certificate that we recommend you use only for testing purposes. In production environments, we recommend that you always use a CA-signed certificate.
-System backup is performed automatically at 3:00 AM daily. The data is saved on a different disk in the sensor. The default location is `/var/cyberx/backups`. You can automatically transfer this file to the internal network.
+Self-signed certificates lead to a less secure environment, as the owner of the certificate can't be validated and the security of your system can't be maintained.
-For more information, see [On-premises backup file capacity](references-data-retention.md#on-premises-backup-file-capacity).
+To create a self-signed certificate, download the certificate file from your OT sensor and then use a certificate management platform to create the certificate files you'll need to upload back to the OT sensor.
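For example, if OpenSSL is available in your environment, a minimal sketch for generating a test key and self-signed certificate might look like the following. The file names and the sensor FQDN are placeholders only, and the subject fields should follow your organization's certificate requirements.

```bash
# Example only: create a private key (KEY file) and a self-signed certificate (CRT file)
# valid for one year, for a hypothetical sensor FQDN. Requires OpenSSL 1.1.1 or later.
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout sensor.key -out sensor.crt \
  -subj "/CN=mysensor.contoso.local" \
  -addext "subjectAltName=DNS:mysensor.contoso.local"
```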
-> [!NOTE]
->
-> - The backup and restore procedure can be performed between the same versions only.
-> - In some architectures, the backup is disabled. You can enable it in the `/var/cyberx/properties/backup.properties` file.
+**To create a self-signed certificate**:
-When you control a sensor by using the on-premises management console, you can use the sensor's backup schedule to collect these backups and store them on the management console or on an external backup server. For more information, see [Define sensor backup schedules](how-to-manage-sensors-from-the-on-premises-management-console.md#define-sensor-backup-schedules).
+1. Go to the OT sensor's IP address in a browser.
-**What is backed up**: Configurations and data.
-**What is not backed up**: PCAP files and logs. You can manually back up and restore PCAPs and logs. For more information, see [Upload and play PCAP files](#upload-and-play-pcap-files).
+When you're done, use the following procedures to validate your certificate files:
-Sensor backup files are automatically named through the following format: `<sensor name>-backup-version-<version>-<date>.tar`. An example is `Sensor_1-backup-version-2.6.0.102-2019-06-24_09:24:55.tar`.
+- [Verify CRL server access](ot-deploy/create-ssl-certificates.md#verify-crl-server-access)
+- [Import the SSL/TLS certificate to a trusted store](ot-deploy/create-ssl-certificates.md#import-the-ssltls-certificate-to-a-trusted-store)
+- [Test your SSL/TLS certificates](ot-deploy/create-ssl-certificates.md#test-your-ssltls-certificates)
-**To configure backup:**
+**To deploy a self-signed certificate**:
-- Sign in to an administrative account and enter `cyberx-xsense-system-backup`.
+1. Sign into your OT sensor and select **System Settings** > **Basic** > **SSL/TLS Certificate**.
-**To restore the latest backup file:**
+1. In the **SSL/TLS Certificates** pane, keep the default **Use Locally generated self-signed certificate (Not recommended)** option selected.
-- Sign in to an administrative account and enter `cyberx-xsense-system-restore`.
+1. Select the **Confirm** option to confirm the warning.
-**To save the backup to an external SMB server:**
+1. In the **Validation for on-premises management console certificates** area, select **Required** if SSL/TLS certificate validation is required. Otherwise, select **None**.
-1. Create a shared folder in the external SMB server.
+ If this option is toggled on and validation fails, communication between relevant components is halted, and a validation error is shown on the sensor. For more information, see [CRT file requirements](best-practices/certificate-requirements.md#crt-file-requirements).
- Get the folder path, username, and password required to access the SMB server.
+1. Select **Save** to save your certificate settings.
-1. In the sensor, make a directory for the backups:
+
- - `sudo mkdir /<backup_folder_name_on_cyberx_server>`
+### Troubleshoot certificate upload errors
- - `sudo chmod 777 /<backup_folder_name_on_cyberx_server>/`
-1. Edit `fstab`:
+## Update the OT sensor network configuration
- - `sudo nano /etc/fstab`
+You configured your OT sensor's network settings during [installation](ot-deploy/install-software-ot-sensor.md). You may need to make changes as part of OT sensor maintenance, such as modifying network values or setting up a proxy configuration.
- - `add - //<server_IP>/<folder_path> /<backup_folder_name_on_cyberx_server> cifs rw,credentials=/etc/samba/user,vers=X.X,uid=cyberx,gid=cyberx,file_mode=0777,dir_mode=0777 0 0`
+**To update the OT sensor configuration:**
-1. Edit and create credentials to share for the SMB server:
+1. Sign into the OT sensor and select **System Settings** > **Basic** > **Sensor network settings**.
- `sudo nano /etc/samba/user`
+1. In the **Sensor network settings** pane, update the following details for your OT sensor as needed:
-1. Add:
+ - **IP address**. Changing the IP address may require users to sign into your OT sensor again.
+ - **Subnet mask**
+ - **Default gateway**
+ - **DNS**. Make sure to use the same hostname that's configured in your organization's DNS server.
+ - **Hostname** (optional)
- - `username=<user name>`
+1. Toggle the **Enable Proxy** option on or off if needed. If you're using a proxy, enter following values:
- - `password=<password>`
+ - **Proxy host**
+ - **Proxy port**
+ - **Proxy username** (optional)
+ - **Proxy password** (optional)
-1. Mount the directory:
+1. Select **Save** to save your changes.
- `sudo mount -a`
+### Turn off learning mode manually
-1. Configure a backup directory to the shared folder on the Defender for IoT sensor:ΓÇ»
+A Microsoft Defender for IoT OT network sensor starts monitoring your network automatically after your [first sign-in](ot-deploy/activate-deploy-sensor.md#sign-in-to-your-ot-sensor). Network devices start appearing in your [device inventory](device-inventory.md), and [alerts](alerts.md) are triggered for any security or operational incidents that occur in your network.
- - `sudo nano /var/cyberx/properties/backup.properties`
+Initially, this activity happens in *learning* mode, which instructs your OT sensor to learn your network's usual activity, including the devices and protocols in your network, and the regular file transfers that occur between specific devices. Any regularly detected activity becomes your network's [baseline traffic](ot-deploy/create-learned-baseline.md).
- - `set backup_directory_path to <backup_folder_name_on_cyberx_server>`
+This procedure describes how to turn off learning mode manually if you feel that the current alerts accurately reflect your network activity.
-### Restore sensors
+**To turn off learning mode**:
-You can restore a sensor from a backup file using the sensor console or the CLI.
+1. Sign into your OT network sensor and select **System settings > Network monitoring > Detection engines and network modeling**.
-For more information, see [CLI command reference from OT network sensors](cli-ot-sensor.md).
+1. Toggle off one or both of the following options:
-# [Restore from the sensor console](#tab/restore-from-sensor-console)
+ - **Learning**. Toggle off this option about two to six weeks after you've deployed your sensor, when you feel that the OT sensor detections accurately reflect your network activity.
-To restore a backup from the sensor console, the backup file must be accessible from the sensor.
+ - **Smart IT Learning**. Keep this option toggled on to keep the number of *nondeterministic* alerts and notifications low.
+
+ Nondeterministic behavior includes changes that are the result of normal IT activity, such as DNS and HTTP requests. Toggling off the **Smart IT Learning** option can trigger many false positive policy violation alerts.
-- **To download a backup file:**
+1. In the confirmation message, select **OK**, and then select **Close** to save your changes.
- 1. Access the sensor using an SFTP client.
+## Synchronize time zones on an OT sensor
- 1. Sign in to an administrative account and enter the sensor IP address.
+You may want to configure your OT sensor with a specific time zone so that all users see the same times regardless of the user's location.
- 1. Download the backup file from your chosen location and save it. The default location for system backup files is `/var/cyberx/backups`.
+Time zones are used in [alerts](how-to-view-alerts.md), [trends and statistics widgets](how-to-create-trends-and-statistics-reports.md), [data mining reports](how-to-create-data-mining-queries.md), [risk assessment reports](how-to-create-risk-assessment-reports.md), and [attack vector reports](how-to-create-attack-vector-reports.md).
-- **To restore the sensor**:
+**To configure an OT sensor's time zone**:
- 1. Sign in to the sensor console and go to **System settings** > **Sensor management** > **Backup & restore** > **Restore**. For example:
+1. Sign into your OT sensor and select **System settings** > **Basic** > **Time & Region**.
- :::image type="content" source="media/how-to-manage-individual-sensors/restore-sensor-screen.png" alt-text="Screenshot of Restore tab in sensor console.":::
+1. In the **Time & Region** pane, enter the following details:
- 1. Select **Browse** to select your downloaded backup file. The sensor will start to restore from the selected backup file.
+ - **Time Zone**: Select the time zone you want to use
- 1. When the restore process is complete, select **Close**.
+ - **Date Format**: Select the time and date format you want to use. Supported formats include:
-# [Restore the latest backup file by using the CLI](#tab/restore-using-cli)
+ - `dd/MM/yyyy HH:mm:ss`
+ - `MM/dd/yyyy HH:mm:ss`
+ - `yyyy/MM/dd HH:mm:ss`
-- Sign in to an administrative account and enter `cyberx-xsense-system-restore`.
+ The **Date & Time** field is automatically updated with the current time in the format you'd selected.
-
+1. Select **Save** to save your changes.
-## Configure SMTP settings
+## Configure SMTP mail server settings
-Define SMTP mail server settings for the sensor so that you configure the sensor to send data to other servers.
+Define SMTP mail server settings on your OT sensor so that the OT sensor can send data to other servers and partner services.
You'll need an SMTP mail server configured to enable email alerts about disconnected sensors, failed sensor backup retrievals, and SPAN monitoring port failures from the on-premises management console, and to set up mail forwarding and configure [forwarding alert rules](how-to-forward-alert-information-to-partners.md).
Make sure you can reach the SMTP server from the [sensor's management port](./best-practices/understand-network-architecture.md).
-**To configure an SMTP server on your sensor**:
+**To configure an SMTP server on your OT sensor**:
-1. Sign in to the sensor as an **Admin** user and select **System settings** > **Integrations** > **Mail server**.
+1. Sign in to the OT sensor and select **System settings** > **Integrations** > **Mail server**.
1. In the **Edit Mail Server Configuration** pane that appears, define the values for your SMTP server as follows:
1. Select **Save** when you're done.
-## Forward sensor failure alerts
-
-You can forward alerts to third parties to provide details about:
--- Disconnected sensors--- Remote backup failures-
-This information is sent when you create a forwarding rule for system notifications.
+## Upload and play PCAP files
-> [!NOTE]
-> Administrators can send system notifications.
+When troubleshooting your OT sensor, you may want to examine data recorded by a specific PCAP file. To do so, you can upload a PCAP file to your OT sensor and replay the data recorded.
-To send notifications:
+The **Play PCAP** option is enabled by default in the sensor console's settings.
-1. Sign in to the on-premises management console.
-1. Select **Forwarding** from the side menu.
-1. Create a forwarding rule.
-1. Select **Report System Notifications**.
+Maximum size for uploaded files is 2 GB.
-For more information about forwarding rules, see [Forward alert information](how-to-forward-alert-information-to-partners.md).
+**To show the PCAP player in your sensor console**:
-## Upload and play PCAP files
+1. On your sensor console, go to **System settings > Sensor management > Advanced Configurations**.
-When troubleshooting, you may want to examine data recorded by a specific PCAP file. To do so, you can upload a PCAP file to your sensor console and replay the data recorded.
+1. In the **Advanced configurations** pane, select the **Pcaps** category.
-The **Play PCAP** option is enabled by default in the sensor console's settings.
+1. In the configurations displayed, change `enabled=0` to `enabled=1`, and select **Save**.
-Maximum size for uploaded files is 2 GB.
+The **Play PCAP** option is now available in the sensor console's settings, under: **System settings > Basic > Play PCAP**.
**To upload and play a PCAP file**:
1. In the **PCAP PLAYER** pane, select **Upload** and then navigate to and select the file or multiple files you want to upload.
-1. Select **Play** to play your PCAP file, or **Play All** to play all PCAP files currently loaded.
+ :::image type="content" source="media/how-to-manage-individual-sensors/upload-and-play-pcaps.png" alt-text="Screenshot of uploading PCAP files on the PCAP PLAYER pane in the sensor console." lightbox="media/how-to-manage-individual-sensors/upload-and-play-pcaps.png":::
+1. Select **Play** to play your PCAP file, or **Play All** to play all PCAP files currently loaded.
> [!TIP]
> Select **Clear All** to clear the sensor of all PCAP files loaded.
-## Adjust system properties
-
-System properties control various operations and settings in the sensor. Editing or modifying them might damage the operation of the sensor console.
-
-Consult with [Microsoft Support](https://support.microsoft.com/) before you change your settings.
+## Turn off specific analytics engines
-To access system properties:
+By default, each OT network sensor analyzes ingested data using [built-in analytics engines](architecture.md#defender-for-iot-analytics-engines), and triggers alerts based on both real-time and prerecorded traffic.
-1. Sign in to the on-premises management console or the sensor.
+While we recommend that you keep all analytics engines on, you may want to turn off specific analytics engines on your OT sensors to limit the type of anomalies and risks monitored by that OT sensor.
-1. Select **System Settings**.
-
-1. Select **System Properties** from the **General** section.
-
-## Download a diagnostics log for support
+> [!IMPORTANT]
+> When you disable a policy engine, information that the engine generates won't be available to the sensor. For example, if you disable the Anomaly engine, you won't receive alerts on network anomalies. If you'd created a [forwarding alert rule](how-to-forward-alert-information-to-partners.md), anomalies that the engine learns won't be sent.
+>
-This procedure describes how to download a diagnostics log to send to support in connection with a specific support ticket.
+**To manage an OT sensor's analytics engines**:
-This feature is supported for the following sensor versions:
+1. Sign into your OT sensor and select **System settings > Network monitoring > Customization > Detection engines and network modeling**.
-- **22.1.1** - Download a diagnostic log from the sensor console
-- **22.1.3** - For locally managed sensors, [upload a diagnostics log](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support) from the **Sites and sensors** page in the Azure portal. This file is automatically sent to support when you open a ticket on a cloud-connected sensor.
+1. In the **Detection engines and network modeling** pane, in the **Engines** area, toggle off one or more of the following engines:
+ - **Protocol Violation**
+ - **Policy Violation**
+ - **Malware**
+ - **Anomaly**
+ - **Operational**
-**To download a diagnostics log**:
+ Toggle the engine back on to start tracking related anomalies and activities again.
-1. On the sensor console, select **System settings** > **Backup & Restore** > **Backup**.
+ For more information, see [Defender for IoT analytics engines](architecture.md#defender-for-iot-analytics-engines).
-1. Under **Logs**, select **Support Ticket Diagnostics**, and then select **Export**.
+1. Select **Close** to save your changes.
- :::image type="content" source="media/release-notes/support-ticket-diagnostics.png" alt-text="Screenshot of the Backup & Restore pane showing the Support Ticket Diagnostics option." lightbox="media/release-notes/support-ticket-diagnostics.png":::
+**To manage analytics engines from an on-premises management console**:
-1. For a locally managed sensor, version 22.1.3 or higher, continue with [Upload a diagnostics log for support](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support).
+1. Sign into your on-premises management console and select **System Settings**.
-## Retrieve forensics data stored on the sensor
+1. In the **Sensor Engine Configuration** section, select one or more OT sensors you want to apply settings for, and clear any of the following options:
-Use Defender for IoT data mining reports on an OT network sensor to retrieve forensic data from that sensor's storage. The following types of forensic data are stored locally on OT sensors, for devices detected by that sensor:
+ - **Protocol Violation**
+ - **Policy Violation**
+ - **Malware**
+ - **Anomaly**
+ - **Operational**
-- Device data
-- Alert data
-- Alert PCAP files
-- Event timeline data
-- Log files
+1. Select **SAVE CHANGES** to save your changes.
-Each type of data has a different retention period and maximum capacity. For more information, see [Create data mining queries](how-to-create-data-mining-queries.md) and [Data retention across Microsoft Defender for IoT](references-data-retention.md).
+## Clear OT sensor data
-## Clearing sensor data
+If you need to relocate or erase your OT sensor, reset it to clear all detected or learned data on the OT sensor.
-In cases where the sensor needs to be relocated or erased, the sensor can be reset.
+After clearing data on a cloud-connected sensor:
-Clearing data deletes all detected or learned data on the sensor. After clearing data on a cloud connected sensor, cloud inventory will be updated accordingly. Additionally, some actions on the corresponding cloud alerts such as downloading PCAPs or learning alerts will not be supported.
+- The device inventory on the Azure portal is updated in parallel.
+- Some actions on corresponding alerts in the Azure portal are no longer supported, such as downloading PCAP files or learning alerts.
> [!NOTE]
> Network settings such as IP/DNS/GATEWAY will not be changed by clearing system data.

**To clear system data**:
-1. Sign in to the sensor as the *cyberx* user.
+1. Sign in to the OT sensor as the *cyberx* user. For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
1. Select **Support** > **Clear data**.
A confirmation message appears that the action was successful. All learned data,
For more information, see:

-- [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md)
-- [Connect your OT sensors to the cloud](connect-sensors.md)
+- [Manage sensors from the on-premises management console](how-to-manage-sensors-from-the-on-premises-management-console.md)
- [Track sensor activity](how-to-track-sensor-activity.md)
-- [Update OT system software](update-ot-software.md)
-- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)
-- [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md)
-- [Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md)
-- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
+- [Troubleshoot the sensor](how-to-troubleshoot-sensor.md)
defender-for-iot How To Manage Sensors From The On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-from-the-on-premises-management-console.md
Title: Manage OT sensors from the on-premises management console
-description: Learn how to manage OT sensors from the on-premises management console, including updating sensor versions, pushing system settings to sensors, managing certificates, and enabling and disabling engines on sensors.
Previously updated : 06/02/2022
+description: Learn how to manage OT sensors from the on-premises management console, such as pushing system settings to OT sensors across your network.
Last updated : 03/09/2023

# Manage sensors from the on-premises management console
-This article describes how to manage OT sensors from an on-premises management console, such as pushing system settings to individual sensors, or enabling or disabling specific engines on your sensors.
+This article describes how you can manage OT sensors from an on-premises management console, such as pushing system settings to OT sensors across your network.
-For more information, see [Next steps](#next-steps).
+## Prerequisites
-## Push configurations
+To perform the procedures in this article, make sure you have:
-You can define various system settings and automatically apply them to sensors that are connected to the management console. This saves time and helps ensure streamlined settings across your enterprise sensors.
+- An on-premises management console [installed](ot-deploy/install-software-on-premises-management-console.md) and [activated](ot-deploy/activate-deploy-management.md)
-You can define the following sensor system settings from the management console:
+- One or more OT network sensors [installed](ot-deploy/install-software-ot-sensor.md), [activated](ot-deploy/activate-deploy-sensor.md), and [connected to your on-premises management console](ot-deploy/connect-sensors-to-management.md)
-- Mail server
+- Access to the on-premises management console as an **Admin** user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
-- SNMP MIB monitoring
+## Push system settings to OT sensors
-- Active Directory
+If you have an OT sensor already configured with system settings that you want to share with other OT sensors, push those settings from the on-premises management console. Sharing system settings across OT sensors saves time and streamlines your settings across your system.
-- DNS settings
-
-- Subnets
+Supported settings include:
+- Mail server settings
+- SNMP MIB monitoring settings
+- Active Directory settings
+- DNS reverse lookup settings
+- Subnet settings
- Port aliases
-**To apply system settings**:
-
-1. On the console's left pane, select **System Settings**.
-
-1. On the **Configure Sensors** pane, select one of the options.
-
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/sensor-system-setting-options.png" alt-text="The system setting options for a sensor.":::
-
- The following example describes how to define mail server parameters for your enterprise sensors.
-
-1. Select **Mail Server**.
-
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/edit-system-settings-screen.png" alt-text="Select your mail server from the System Settings screen.":::
-
-1. Select a sensor on the left.
-
-1. Set the mail server parameters and select **Duplicate**. Each item in the sensor tree appears with a check box next to it.
+**To push system settings across OT sensors**:
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/check-off-each-sensor.png" alt-text="Ensure the check boxes are selected for your sensors.":::
+1. Sign into your on-premises management console and select **System settings**.
-1. In the sensor tree, select the items to which you want to apply the configuration.
+1. Scroll down to view the **Configure Sensors** area and select the setting you want to push across your OT sensors.
-1. Select **Save**.
+1. In the **Edit ... Configuration** dialog, select the OT sensor you want to share settings *from*. The dialog shows the current settings defined for the selected sensor.
-## Understand sensor disconnection events
+1. Confirm that the current settings are the ones you want to share across your system, and then select **Duplicate**.
-The **Site Manager** window displays disconnection information if sensors disconnect from their assigned on-premises management console. The following sensor disconnection information is available:
+1. Select **Save** to save your changes.
-- "The on-premises management console canΓÇÖt process data received from the sensor."
+The selected settings are applied across all connected OT sensors.
-- "Times drift detected. The on-premises management console has been disconnected from sensor."
+## Monitor disconnected OT sensors
-- "Sensor not communicating with on-premises management console. Check network connectivity or certificate validation."
+If you're working with locally managed OT network sensors and an on-premises management console, we recommend that you forward alerts about OT sensors that are disconnected from the on-premises management console to partner services.
+### View OT sensor connection statuses
-You can send alerts to third parties with information about disconnected sensors. For more information, see [Forward sensor failure alerts](how-to-manage-individual-sensors.md#forward-sensor-failure-alerts).
+Sign into the on-premises management console and select **Site Management** to check for any disconnected sensors.
-## Enable or disable sensors
+For example, you might see one of the following disconnection messages:
-Sensors are protected by Defender for IoT engines. You can enable or disable the engines for connected sensors.
+- **The on-premises management console can't process data received from the sensor.**
-| Engine | Description | Example scenario |
-|--|--|--|
-| Protocol violation engine | A protocol violation occurs when the packet structure or field values don't comply with the protocol specification. | "Illegal MODBUS Operation (Function Code Zero)" alert. This alert indicates that a primary device sent a request with function code 0 to a secondary device. This isn't allowed according to the protocol specification, and the secondary device might not handle the input correctly. |
-| Policy violation engine | A policy violation occurs with a deviation from baseline behavior defined in the learned or configured policy. | "Unauthorized HTTP User Agent" alert. This alert indicates that an application that wasn't learned or approved by the policy is used as an HTTP client on a device. This might be a new web browser or application on that device. |
-| Malware engine | The malware engine detects malicious network activity. | "Suspicion of Malicious Activity (Stuxnet)" alert. This alert indicates that the sensor found suspicious network activity known to be related to the Stuxnet malware, which is an advanced persistent threat aimed at industrial control and SCADA networks. |
-| Anomaly engine | The malware engine detects an anomaly in network behavior. | "Periodic Behavior in Communication Channel." This is a component that inspects network connections and finds periodic or cyclic behavior of data transmission, which is common in industrial networks. |
-| Operational engine | This engine detects operational incidents or malfunctioning entities. | `Device is Suspected to be Disconnected (Unresponsive)` alert. This alert triggered when a device isn't responding to any requests for a predefined period. It might indicate a device shutdown, disconnection, or malfunction.
+- **Times drift detected. The on-premises management console has been disconnected from sensor.**
-**To enable or disable engines for connected sensors:**
+- **Sensor not communicating with on-premises management console. Check network connectivity or certificate validation.**
-1. In the console's left pane, select **System Settings**.
-
-1. In the **Sensor Engine Configuration** section, select **Enable** or **Disable** for the engines.
-
-1. Select **SAVE CHANGES**.
-
- A red exclamation mark appears if there's a mismatch of enabled engines on one of your enterprise sensors. The engine might have been disabled directly from the sensor.
-
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/red-exclamation-example.png" alt-text="Mismatch of enabled engines.":::
+> [!TIP]
+> You may want to send alerts about your OT sensor connection status on the on-premises management console to partner services.
+>
+> To do this, [create a forwarding alert rule](how-to-forward-alert-information-to-partners.md#create-forwarding-rules-on-an-on-premises-management-console) on your on-premises management console. In the **Create Forwarding Rule** dialog box, make sure to select **Report System Notifications**.
## Retrieve forensics data stored on the sensor
-Use Defender for IoT data mining reports on an OT network sensor to retrieve forensic data from that sensor's storage. The following types of forensic data is stored locally on OT sensors, for devices detected by that sensor:
+Use Defender for IoT data mining reports on an OT network sensor to retrieve forensic data from that sensor's storage. The following types of forensic data are stored locally on OT sensors, for devices detected by that sensor:
- Device data
- Alert data
Use Defender for IoT data mining reports on an OT network sensor to retrieve for
- Event timeline data
- Log files
-Each type of data has a different retention period and maximum capacity. For more information see [Create data mining queries](how-to-create-data-mining-queries.md) and [Data retention across Microsoft Defender for IoT](references-data-retention.md).
-
-## Define sensor backup schedules
-
-You can schedule sensor backups and perform on-demand sensor backups from the on-premises management console. This helps protect against hard drive failures and data loss.
-- What is backed up: Configurations and data.
-
-- What isn't backed up: PCAP files and logs. You can manually back up and restore PCAPs and logs.
-
-By default, sensors are automatically backed up at 3:00 AM daily. The backup schedule feature for sensors lets you collect these backups and store them on the on-premises management console or on an external backup server. Copying files from sensors to the on-premises management console happens over an encrypted channel.
--
-When the default sensor backup location is changed, the on-premises management console automatically retrieves the files from the new location on the sensor or an external location, provided that the console has permission to access the location.
-
-When the sensors aren't registered with the on-premises management console, the **Sensor Backup Schedule** dialog box indicates that no sensors are managed.
-
-The restore process is the same regardless of where the files are stored. For more information on how to restore a sensor, see [Restore sensors](how-to-manage-individual-sensors.md#restore-sensors).
-
-### Backup storage for sensors
-
-You can use the on-premises management console to maintain up to nine backups for each managed sensor, provided that the backed-up files don't exceed the maximum backup space that's allocated.
-
-The available space is calculated based on the management console model you're working with:
-- **Production model**: Default storage is 40 GB; limit is 100 GB.
-
-- **Medium model**: Default storage is 20 GB; limit is 50 GB.
-
-- **Laptop model**: Default storage is 10 GB; limit is 25 GB.
-
-- **Thin model**: Default storage is 2 GB; limit is 4 GB.
-
-- **Rugged model**: Default storage is 10 GB; limit is 25 GB.
-
-The default allocation is displayed in the **Sensor Backup Schedule** dialog box.
--
-There's no storage limit when you're backing up to an external server. You must, however, define an upper allocation limit in the **Sensor Backup Schedule** > **Custom Path** field. The following numbers and characters are supported: `/, a-z, A-Z, 0-9, and _`.
-
-Here's information about exceeding allocation storage limits:
-- If you exceed the allocated storage space, the sensor isn't backed up.
-
-- If you're backing up more than one sensor, the management console tries to retrieve sensor files for the managed sensors.
-
-- If the retrieval from one sensor exceeds the limit, the management console tries to retrieve backup information from the next sensor.
-
-When you exceed the retained number of backups defined, the oldest backed-up file is deleted to accommodate the new one.
-
-Sensor backup files are automatically named in the following format: `<sensor name>-backup-version-<version>-<date>.tar`. For example: `Sensor_1-backup-version-2.6.0.102-2019-06-24_09:24:55.tar`.
-
-**To back up sensors:**
-
-1. Select **Schedule Sensor Backup** from the **System Settings** window. Sensors that your on-premises management console manages appear in the **Sensor Backup Schedule** dialog box.
-
-1. Enable the **Collect Backups** toggle.
-
-1. Select a calendar interval, date, and time zone. The time format is based on a 24-hour clock. For example, enter 6:00 PM as **18:00**.
-
-1. In the **Backup Storage Allocation** field, enter the storage that you want to allocate for your backups. You're notified if you exceed the maximum space.
-
-1. In the **Retain Last** field, indicate the number of backups per sensor you want to retain. When the limit is exceeded, the oldest backup is deleted.
-
-1. Choose a backup location:
-
- - To back up to the on-premises management console, disable the **Custom Path** toggle. The default location is `/var/cyberx/sensor-backups`.
-
- - To back up to an external server, enable the **Custom Path** toggle and enter a location. The following numbers and characters are supported: `/, a-z, A-Z, 0-9, and, _`.
-
-1. Select **Save**.
-
-**To back up immediately:**
-- Select **Back Up Now**. The on-premises management console creates and collects sensor backup files.
-
-### Receiving backup notifications for sensors
-
-The **Sensor Backup Schedule** dialog box and the backup log automatically list information about backup successes and failures.
--
-Failures might occur because:
-- No backup file is found.
-
-- A file was found but can't be retrieved.
-
-- There's a network connection failure.
-
-- There's not enough room allocated to the on-premises management console to complete the backup.
-
-You can send an email notification, syslog updates, and system notifications when a failure occurs. To do this, create a forwarding rule in **Forwarding**.
-
-### Save a sensor backup to an external SMB server
-
-**To set up an SMB server so you can save a sensor backup to an external drive:**
-
-1. Create a shared folder in the external SMB server.
-
-1. Get the folder path, username, and password required to access the SMB server.
-
-1. In Defender for IoT, make a directory for the backups:
-
- ```bash
- sudo mkdir /<backup_folder_name_on_server>
-
- sudo chmod 777 /<backup_folder_name_on_server>/
- ```
-
-1. Edit fstab:
-
- ```bash
- sudo nano /etc/fstab
-
- add - //<server_IP>/<folder_path> /<backup_folder_name_on_cyberx_server> cifs rw,credentials=/etc/samba/user,vers=3.0,uid=cyberx,gid=cyberx,file_mode=0777,dir_mode=0777 0 0
- ```
-
-1. Edit or create credentials to share. These are the credentials for the SMB server:
-
- ```bash
- sudo nano /etc/samba/user
- ```
+Each type of data has a different retention period and maximum capacity. For more information, see [Create data mining queries](how-to-create-data-mining-queries.md) and [Data retention across Microsoft Defender for IoT](references-data-retention.md).
-1. Add:
+### Turn off learning mode from your on-premises management console
- ```bash
- username=<user name>
+A Microsoft Defender for IoT OT network sensor starts monitoring your network automatically after your [first sign-in](ot-deploy/activate-deploy-sensor.md#sign-in-to-your-ot-sensor). Network devices start appearing in your [device inventory](device-inventory.md), and [alerts](alerts.md) are triggered for any security or operational incidents that occur in your network.
- password=<password>
- ```
+Initially, this activity happens in *learning* mode, which instructs your OT sensor to learn your network's usual activity, including the devices and protocols in your network, and the regular file transfers that occur between specific devices. Any regularly detected activity becomes your network's [baseline traffic](ot-deploy/create-learned-baseline.md).
-1. Mount the directory:
+This procedure describes how to turn off learning mode manually for all connected sensors if you feel that the current alerts accurately reflect your network activity.
- ```bash
- sudo mount -a
- ```
+**To turn off learning mode**:
-1. Configure a backup directory to the shared folder on the Defender for IoT sensor:
+1. Sign into your on-premises management console and select **System Settings**.
- ```bash
- sudo nano /var/cyberx/properties/backup.properties
- ```
+1. In the **Sensor Engine Configuration** section, select one or more OT sensors you want to apply settings for, and clear the **Learning Mode** option.
-1. Set `Backup.shared_location` to `<backup_folder_name_on_cyberx_server>`.
+1. Select **SAVE CHANGES** to save your changes.
## Next steps

For more information, see:

- [Manage individual sensors](how-to-manage-individual-sensors.md)
-- [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md)
- [Connect your OT sensors to the cloud](connect-sensors.md)
- [Track sensor activity](how-to-track-sensor-activity.md)
- [Update OT system software](update-ot-software.md)
+- [Troubleshoot on-premises management console](how-to-troubleshoot-on-premises-management-console.md)
- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)
-- [Maintain threat intelligence packages on OT network sensors](how-to-work-with-threat-intelligence-packages.md)
-- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
+- [Manage threat intelligence packages on OT sensors](how-to-work-with-threat-intelligence-packages.md)
+- [Control the OT traffic monitored by Microsoft Defender for IoT](how-to-control-what-traffic-is-monitored.md)
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
Details about each sensor are listed in the following columns:
## Site management options from the Azure portal
-When onboarding a new OT sensor to the Defender for IoT, you can add it to a new or existing site. When working with OT networks, organizing your sensors into sites allows you to manage your sensors more efficiently.
+When [onboarding a new OT sensor](onboard-sensors.md) to Defender for IoT, you can add it to a new or existing site. When working with OT networks, organizing your sensors into sites allows you to manage your sensors more efficiently and align with a [Zero Trust strategy](concept-zero-trust.md) across your network.
Enterprise IoT sensors are all automatically added to the same site, named **Enterprise network**.
Use the options on the **Sites and sensor** page and a sensor details page to do
| :::image type="icon" source="medi). | |:::image type="icon" source="medi). | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit automatic threat intelligence updates** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. <br><br>Select **Edit** and then toggle the **Automatic Threat Intelligence Updates (Preview)** option on or off as needed. Select **Submit** to save your changes. |
-|:::image type="icon" source="medi#update-legacy-ot-sensor-software). |
+|:::image type="icon" source="medi#update-legacy-ot-sensor-software). |
### Sensor deployment and access
Use the options on the **Sites and sensor** page and a sensor details page to do
| **Recover an on-premises management console password** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md). | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-export.png" border="false"::: **Download an activation file** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit a sensor zone** | For individual sensors only, from the **...** options menu or a sensor details page. <br><br>Select **Edit**, and then select a new zone from the **Zone** menu or select **Create new zone**. Select **Submit** to save your changes. |
-| **Download SNMP MIB file** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Set up OT sensor health monitoring via SNMP](how-to-set-up-snmp-mib-monitoring.md).|
+| **Download SNMP MIB file** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Set up SNMP MIB health monitoring on an OT sensor](how-to-set-up-snmp-mib-monitoring.md).|
|:::image type="icon" source="medi#install-enterprise-iot-sensor-software). | |<a name="endpoint"></a> **Download endpoint details** (Public preview) | OT sensors only, with versions 22.x and higher only.<br><br>Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>Download the list of endpoints that must be enabled as secure endpoints from OT network sensors. Make sure that HTTPS traffic is enabled over port 443 to the listed endpoints for your sensor to connect to Azure. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.<br><br>To enable this option, select a sensor with a supported software version, or a site with one or more sensors with supported versions. |
You may need to reactivate an OT sensor because you want to:
In such cases, do the following steps: 1. [Delete your existing sensor](#sensor-management-options-from-the-azure-portal).
-1. [Onboard the sensor again](onboard-sensors.md#onboard-an-ot-sensor), registering it with any new settings.
+1. [Onboard the sensor again](onboard-sensors.md), registering it with any new settings.
1. [Upload your new activation file](how-to-manage-individual-sensors.md#upload-a-new-activation-file). ### Reactivate an OT sensor for upgrades to version 22.x from a legacy version
If you need to open a support ticket for a locally managed sensor, upload a diag
**To upload a diagnostics report**:
-1. Make sure you have the diagnostics report available for upload. For more information, see [Download a diagnostics log for support](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support).
+1. Make sure you have the diagnostics report available for upload. For more information, see [Download a diagnostics log for support](how-to-troubleshoot-sensor.md#download-a-diagnostics-log-for-support).
1. In Defender for IoT in the Azure portal, go to the **Sites and sensors** page and select the locally managed sensor that's related to your support ticket.
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Before performing the procedures in this article, make sure that you have:
- A [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user role for the Azure subscription that you'll be using for the integration
-## Calculate committed devices for OT monitoring
+### Calculate committed devices for OT monitoring
-If you're adding a plan with a monthly or annual commitment, you'll be asked to enter the number of [committed devices](billing.md#defender-for-iot-committed-devices), which are the approximate number of devices that will be monitored in your enterprise.
-
-We recommend that you make an initial estimate of your committed devices when onboarding your Defender for IoT plan. You can skip this procedure if you're adding a trial plan.
+If you're working with a monthly or annual commitment, you'll need to periodically update the number of *committed devices* in your plan as your network grows.
**To calculate committed devices**:
We recommend that you make an initial estimate of your committed devices when on
- **Broadcast groups**
- **Inactive devices**: Devices that have no network activity detected for more than 60 days
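For example, if your full device inventory shows 1,000 devices, of which 20 are broadcast groups and 80 have had no network activity for more than 60 days, you'd count approximately 900 committed devices. (The figures here are illustrative only; use your own inventory numbers.)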
-After you've onboarded your plan, [set up a network sensor](tutorial-onboarding.md) and have [full visibility into your devices](how-to-manage-device-inventory-for-organizations.md), [edit a plan](#edit-a-plan-for-ot-networks) to update the number of committed devices as needed.
+For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device)
## Onboard a Defender for IoT plan for OT networks
For example, you may have more devices that require monitoring if you're increas
1. Make any of the following changes as needed: - Change your price plan from a trial to a monthly or annual commitment
- - Update the number of committed devices
+ - Update the number of [committed devices](#calculate-committed-devices-for-ot-monitoring)
- Update the number of sites (annual commitments only) 1. Select the **I accept the terms** option, and then select **Purchase**. 1. After any changes are made, make sure to reactivate your sensors. For more information, see [Reactivate an OT sensor](how-to-manage-sensors-on-the-cloud.md#reactivate-an-ot-sensor).
-1. If you have an on-premises management console, make sure to upload a new activation file, which reflects the changes made. For more information, see [Upload an activation file](how-to-manage-the-on-premises-management-console.md#upload-an-activation-file).
+1. If you have an on-premises management console, make sure to upload a new activation file, which reflects the changes made. For more information, see [Upload a new activation file](how-to-manage-the-on-premises-management-console.md#upload-a-new-activation-file).
Changes to your plan will take effect one hour after confirming the change. This change will appear on your next monthly statement, and you'll be charged based on the length of time each plan was in effect.
Billing changes will take effect one hour after cancellation of the previous sub
1. In the Azure portal, [onboard a new plan for OT networks](#onboard-a-defender-for-iot-plan-for-ot-networks) to the new subscription you want to use.
-1. Create a new activation file by [following the steps to onboard an OT sensor](onboard-sensors.md#onboard-an-ot-sensor).
+1. Create a new activation file by [following the steps to onboard an OT sensor](onboard-sensors.md).
- Replicate site and sensor hierarchy as is.
For more information, see:
- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)
-- [Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md)
+- [Activate and set up an on-premises management console](ot-deploy/activate-deploy-management.md)
- [Create an additional Azure subscription](../../cost-management-billing/manage/create-subscription.md)
-- [Upgrade your Azure subscription](../../cost-management-billing/manage/upgrade-azure-subscription.md)
+- [Upgrade your Azure subscription](../../cost-management-billing/manage/upgrade-azure-subscription.md)
defender-for-iot How To Manage The On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-the-on-premises-management-console.md
Title: Manage the on-premises management console
+ Title: Maintain the on-premises management console from the GUI - Microsoft Defender for IoT
description: Learn about on-premises management console options like backup and restore, defining the host name, and setting up a proxy to sensors. Last updated 06/02/2022
-# Manage the on-premises management console
+# Maintain the on-premises management console
-This article covers on-premises management console options like backup and restore, downloading the committed device activation file, updating certificates, and setting up a proxy to sensors.
+This article describes extra on-premises management console activities that you might perform outside of a larger deployment process.
++
+## Prerequisites
+
+Before performing the procedures in this article, make sure that you have:
+
+- An on-premises management console [installed](ot-deploy/install-software-on-premises-management-console.md) and [activated](ot-deploy/activate-deploy-management.md).
+
+- Access to the on-premises management console as an **Admin** user. Selected procedures and CLI access also require a privileged user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
+
+- An [SSL/TLS certificate prepared](ot-deploy/create-ssl-certificates.md) if you need to update your sensor's certificate.
+
+- If you're adding a secondary NIC, you'll need access to the CLI as a [privileged user](roles-on-premises.md#default-privileged-on-premises-users).
## Download software for the on-premises management console
In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defe
- If you're updating your on-premises management console together with connected OT sensors, use the options in the **Sites and sensors** page > **Sensor update (Preview)** menu.
+## Add a secondary NIC after installation
-For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md#update-an-on-premises-management-console).
+Enhance the security of your on-premises management console by adding a secondary NIC dedicated to attached sensors within an IP address range. When you use a secondary NIC, the first NIC is dedicated to end users, and the secondary NIC supports the configuration of a gateway for routed networks.
-## Upload an activation file
+This procedure describes how to add a secondary NIC after [installing your on-premises management console](ot-deploy/install-software-on-premises-management-console.md).
-When you first sign in, an activation file for the on-premises management console is downloaded. This file contains the aggregate committed devices that are defined during the onboarding process. The list includes sensors associated with multiple subscriptions.
+**To add a secondary NIC**:
-After initial activation, the number of monitored devices might exceed the number of committed devices defined during onboarding. This event might happen, for example, if you connect more sensors to the management console. If there's a discrepancy between the number of monitored devices and the number of committed devices, a warning appears in the management console. If this event occurs, you should upload a new activation file.
+1. Sign into your on-premises management console via SSH to access the [CLI](../references-work-with-defender-for-iot-cli-commands.md), and run:
-**To upload an activation file:**
+ ```bash
+ sudo cyberx-management-network-reconfigure
+ ```
-1. Go to the Microsoft Defender for IoT **Plans and pricing** page.
-1. Select the **Download the activation file for the management console** tab. The activation file is downloaded.
+1. Enter responses to the following questions:
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/cloud_download_opm_activation_file.png" alt-text="Download the activation file.":::
+ :::image type="content" source="media/tutorial-install-components/network-reconfig-command.png" alt-text="Screenshot of the required answers to configure your appliance. ":::
- [!INCLUDE [root-of-trust](includes/root-of-trust.md)]
+ | Parameters | Response to enter |
+ |--|--|
+ | **Management Network IP address** | `N` |
+ | **Subnet mask** | `N` |
+ | **DNS** | `N` |
+ | **Default gateway IP Address** | `N` |
+ | **Sensor monitoring interface** <br>Optional. Relevant when sensors are on a different network segment.| `Y`, and select a possible value |
+ | **An IP address for the sensor monitoring interface** | `Y`, and enter an IP address that's accessible by the sensors|
+ | **A subnet mask for the sensor monitoring interface** | `Y`, and enter the subnet mask for that interface|
+ | **Hostname** | Enter the hostname |
-1. Select **System Settings** from the management console.
-1. Select **Activation**.
-1. Select **Choose a File** and select the file that you saved.
+1. Review all choices and enter `Y` to accept the changes. The system reboots.
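After the appliance reboots, you can optionally confirm that the secondary interface came up as expected. This is a minimal check; the interface names and addresses depend on your appliance:

```bash
# List all network interfaces and their IP addresses to confirm that the
# secondary NIC is up and has the address you configured.
ip addr show
```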
-## Manage certificates
+## Upload a new activation file
-When you first [install an on-premises management console](ot-deploy/install-software-on-premises-management-console.md), a local, self-signed certificate is generated and used to access the on-premises management console's UI. When signing into the on-premises management console for the first time, **Admin** users are prompted to provide an SSL/TLS certificate.
+You activated your on-premises management console as part of your initial deployment.
-If your certificate has expired, make sure to create a new one and upload it to your on-premises management console.
+You may need to reactivate your on-premises management console as part of maintenance procedures, such as if the total number of monitored devices exceeds the number of [committed devices you'd previously defined](how-to-manage-subscriptions.md#onboard-a-defender-for-iot-plan-for-ot-networks).
-For more information, see [Deploy SSL/TLS certificates on OT appliances](how-to-deploy-certificates.md).
+> [!TIP]
+> If your OT sensors detect more devices than you have defined as committed devices in your OT plan, your on-premises management console will show a warning message, prompting you to update the number of committed devices. Make sure that you update your activation file after updating the OT plan with the new number of committed devices.
+>
-Following on-premises management console installation, a local self-signed certificate is generated and used to access the web application. When logging in to the on-premises management console for the first time, Administrator users are prompted to provide an SSL/TLS certificate.
+**To upload a new activation file to your on-premises management console**:
-Administrators may be required to update certificates that were uploaded after initial login. This may happen for example if a certificate expired.
+1. In Defender for IoT on the Azure portal, select **Plans and pricing** > **Download on-premises management console activation file**.
-**To update a certificate:**
+ Save your downloaded file in a location that's accessible from the on-premises management console.
-1. Select **System Settings**.
+ [!INCLUDE [root-of-trust](includes/root-of-trust.md)]
-1. Select **SSL/TLS Certificates.**
+1. Sign into your on-premises management console and select **System Settings** > **Activation**.
- :::image type="content" source="media/how-to-manage-individual-sensors/certificate-upload.png" alt-text="Upload a certificate":::
+1. In the **Activation** dialog, select **CHOOSE FILE** and browse to the activation file you'd downloaded earlier.
-1. In the SSL/TLS Certificates dialog box, delete the existing certificate and add a new one.
+1. Select **Close** to save your changes.
- - Add a certificate name.
- - Upload a CRT file and key file.
- - Upload a PEM file if necessary.
+## Manage SSL/TLS certificates
-If the upload fails, contact your security or IT administrator, or review the information in [About Certificates](how-to-deploy-certificates.md).
+If you're working in a production environment, you should have deployed a [CA-signed SSL/TLS certificate](ot-deploy/activate-deploy-management.md#deploy-an-ssltls-certificate) as part of your on-premises management console deployment. We recommend using self-signed certificates only for testing purposes.
-**To change the certificate validation setting:**
+The following procedures describe how to deploy updated SSL/TLS certificates, such as if the certificate has expired.
-1. Enable or disable the **Enable Certificate Validation** toggle. If the option is enabled and validation fails, communication between relevant components is halted and a validation error is presented in the console. If disabled, certificate validation is not carried out. See [Verify CRL server access](how-to-deploy-certificates.md#verify-crl-server-access) for more information.
+# [Deploy a CA-signed certificate](#tab/ca-signed)
-1. Select **Save**.
+**To deploy a CA-signed certificate**:
-For more information about first-time certificate upload, see [First-time sign-in and activation checklist](how-to-activate-and-set-up-your-sensor.md#first-time-sign-in-and-activation-checklist).
+1. Sign into your on-premises management console and select **System Settings** > **SSL/TLS Certificates**.
-## Define backup and restore settings
+1. In the **SSL/TLS Certificates** dialog, select **+ Add Certificate** and enter the following values:
-The on-premises management console system backup is performed automatically, daily. The data is saved on a different disk. The default location is `/var/cyberx/backups`.
+ | Parameter | Description |
+ |||
+ | **Certificate Name** | Enter your certificate name. |
+ | **Passphrase** - *Optional* | Enter a passphrase. |
+ | **Private Key (KEY file)** | Upload a Private Key (KEY file). |
+ | **Certificate (CRT file)** | Upload a Certificate (CRT file). |
+ | **Certificate Chain (PEM file)** - *Optional* | Upload a Certificate Chain (PEM file). |
-You can automatically transfer this file to the internal network.
+ For example:
-> [!NOTE]
-> You can perform the backup and restore procedure on the same version only.
+ :::image type="content" source="media/how-to-deploy-certificates/management-ssl-certificate.png" alt-text="Screenshot of importing a trusted CA certificate." lightbox="media/how-to-deploy-certificates/management-ssl-certificate.png":::
-To back up the on-premises management console machine:
+ If the upload fails, contact your security or IT administrator. For more information, see [SSL/TLS certificate requirements for on-premises resources](best-practices/certificate-requirements.md) and [Create SSL/TLS certificates for OT appliances](ot-deploy/create-ssl-certificates.md).
-- Sign in to an administrative account and enter `sudo cyberx-management-backup -full`.
+1. Select the **Enable Certificate Validation** option to turn on system-wide validation for SSL/TLS certificates with the issuing [Certificate Authority](ot-deploy/create-ssl-certificates.md#create-a-ca-signed-ssltls-certificate) and [Certificate Revocation Lists](ot-deploy/create-ssl-certificates.md#verify-crl-server-access).
-To restore the latest backup file:
+ If this option is turned on and validation fails, communication between relevant components is halted, and a validation error is shown on the sensor. For more information, see [CRT file requirements](best-practices/certificate-requirements.md#crt-file-requirements).
-- Sign in to an administrative account and enter `$ sudo cyberx-management-system-restore`.
+1. Select **Save** to save your changes.
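If a CA-signed certificate upload fails, one common cause is a private key that doesn't match the certificate. As an optional check before retrying the upload, compare the public key hashes of both files. This is a sketch with hypothetical file names, assuming OpenSSL is available on your workstation:

```bash
# Both commands should print the same SHA-256 digest if the certificate and
# private key belong to the same key pair.
openssl x509 -noout -pubkey -in management-console.crt | openssl sha256
openssl pkey -pubout -in management-console.key | openssl sha256
```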
-To save the backup to an external SMB server:
+# [Create and deploy a self-signed certificate](#tab/windows)
-1. Create a shared folder in the external SMB server.
+Each on-premises management console is installed with a self-signed certificate that we recommend you use only for testing purposes. In production environments, we recommend that you always use a CA-signed certificate.
- Get the folder path, username, and password required to access the SMB server.
+Self-signed certificates lead to a less secure environment, as the owner of the certificate can't be validated and the security of your system can't be maintained.
-2. In Defender for IoT, make a directory for the backups:
+To create a self-signed certificate, download the certificate file from your on-premises management console and then use a certificate management platform to create the certificate files you'll need to upload back to the on-premises management console.
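For example, if your organization generates the certificate files itself rather than exporting them from the console, a minimal OpenSSL sketch might look like the following. The file names, subject, and validity period are hypothetical; replace them with your own values:

```bash
# Generate a 4096-bit RSA private key and a self-signed certificate valid for
# one year, written to management-console.key and management-console.crt.
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout management-console.key \
  -out management-console.crt \
  -subj "/CN=management-console.contoso.local"
```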
- - `sudo mkdir /<backup_folder_name_on_ server>`
+**To create a self-signed certificate**:
- - `sudo chmod 777 /<backup_folder_name_on_c_server>/`
+1. Go to the on-premises management console's IP address in a browser.
-3. Edit fstab:ΓÇ»
- - `sudo nano /etc/fstab`
+When you're done, use the following procedures to validate your certificate files:
- - `add - //<server_IP>/<folder_path> /<backup_folder_name_on_server> cifs rw,credentials=/etc/samba/user,vers=3.0,uid=cyberx,gid=cyberx,file_mode=0777,dir_mode=0777 0 0`
+- [Verify CRL server access](ot-deploy/create-ssl-certificates.md#verify-crl-server-access)
+- [Import the SSL/TLS certificate to a trusted store](ot-deploy/create-ssl-certificates.md#import-the-ssltls-certificate-to-a-trusted-store)
+- [Test your SSL/TLS certificates](ot-deploy/create-ssl-certificates.md#test-your-ssltls-certificates)
-4. Edit or create credentials for the SMB server to share:
+**To deploy a self-signed certificate**:
- - `sudo nano /etc/samba/user`
+1. Sign into your on-premises management console and select **System settings** > **SSL/TLS Certificates**.
-5. Add:
+1. In the **SSL/TLS Certificates** dialog, keep the default **Locally generated self-signed certificate (insecure, not recommended)** option selected.
- - `username=<user name>`
+1. Select **I CONFIRM** to acknowledge the warning.
- - `password=<password>`
+1. Select the **Enable certificate validation** option to validate the certificate against a [CRL server](ot-deploy/create-ssl-certificates.md#verify-crl-server-access). The certificate is checked once during the import process.
-6. Mount the directory:
+ If this option is turned on and validation fails, communication between relevant components is halted, and a validation error is shown on the sensor. For more information, see [CRT file requirements](best-practices/certificate-requirements.md#crt-file-requirements).
- - `sudo mount -a`
+1. Select **Save** to save your certificate settings.
-7. Configure a backup directory to the shared folder on the Defender for IoT on-premises management console:ΓÇ»
+
- - `sudo nano /var/cyberx/properties/backup.properties`
+### Troubleshoot certificate upload errors
- - `set Backup.shared_location to <backup_folder_name_on_server>`
-## Edit the host name
+## Change the name of the on-premises management console
-To edit the management console's host name configured in the organizational DNS server:
+The default name of your on-premises management console is **Management console**, and is shown in the on-premises management console GUI and troubleshooting logs.
-1. In the management console's left pane, select **System Settings**.
+To change the name of your on-premises management console:
-2. In the console's networking section, select **Network**.
+1. Sign into your on-premises management console and select the name on the bottom-left, just above the version number.
-3. Enter the host name configured in the organizational DNS server.
+1. In the **Edit management console configuration** dialog, enter your new name. The name must have a maximum of 25 characters. For example:
-4. Select **Save**.
+ :::image type="content" source="media/how-to-manage-the-on-premises-management-console/change-name.png" alt-text="Screenshot of how to change the name of your on-premises management console.":::
-## Define VLAN names
+1. Select **Save** to save your changes.
-VLAN names are not synchronized between the sensor and the management console. Define identical names on components.
+## Recover a privileged user password
-In the networking area, select **VLAN** and add names to the discovered VLAN IDs. Then select **Save**.
+If you no longer have access to your on-premises management console as a [privileged user](roles-on-premises.md#default-privileged-on-premises-users), recover access from the Azure portal.
-## Define a proxy to sensors
+**To recover privileged user access**:
-Enhance system security by preventing user sign-in directly to the sensor. Instead, use proxy tunneling to let users access the sensor from the on-premises management console with a single firewall rule. This enhancement narrows the possibility of unauthorized access to the network environment beyond the sensor.
+1. Go to the sign-in page for your on-premises management console and select **Password Recovery**.
-Use a proxy in environments where there's no direct connectivity to sensors.
+1. Select the user that you want to recover access for, either the **Support** or **CyberX** user.
+1. Copy the identifier that's displayed in the **Password Recovery** dialog to a secure location.
-The following procedure connects a sensor to the on-premises management console and enables tunneling on that console:
+1. Go to Defender for IoT in the Azure portal, and make sure that you're viewing the subscription that was used to onboard the OT sensors currently connected to the on-premises management console.
-1. Sign in to the on-premises management console appliance CLI with administrative credentials.
+1. Select **Sites and sensors** > **More actions** > **Recover an on-premises management console password**.
-1. Type `sudo cyberx-management-tunnel-enable` and select **Enter**.
+1. Enter the secret identifier you'd copied earlier from your on-premises management console and select **Recover**.
-1. Type `--port 10000` and select **Enter**.
+ A `password_recovery.zip` file is downloaded from your browser.
-## Adjust system properties
+ [!INCLUDE [root-of-trust](includes/root-of-trust.md)]
-System properties control various operations and settings in the management console. Editing or modifying them might damage the management console's operation. Consult with [Microsoft Support](https://support.microsoft.com) before changing your settings.
+1. In the **Password Recovery** dialog on the on-premises management console, select **Upload** and select the `password_recovery.zip` file you'd downloaded.
-To access system properties:
+Your new credentials are displayed.
-1. Sign in to the on-premises management console or the sensor.
+## Edit the host name
-1. Select **System Settings**.
+The on-premises management console's hostname must match the hostname configured in the organizational DNS server.
-1. Select **System Properties** from the **General** section.
+**To edit the hostname saved on the on-premises management console**:
-## Change the name of the on-premises management console
+1. Sign into the on-premises management console and select **System Settings**.
+
+1. In the **Management console networking** area, select **Network**.
-You can change the name of the on-premises management console. The new names appear in the console web browser, in various console windows, and in troubleshooting logs. The default name is **management console**.
+1. Enter the new hostname and select **SAVE** to save your changes.
+
+## Define VLAN names
-To change the name:
+VLAN names aren't synchronized between an OT sensor and the on-premises management console. If you've [defined VLAN names on your OT sensor](how-to-control-what-traffic-is-monitored.md#customize-a-vlan-name), we recommend that you define identical VLAN names on the on-premises management console.
-1. In the bottom of the left pane, select the current name.
+**To define VLAN names**:
- :::image type="content" source="media/how-to-change-the-name-of-your-azure-consoles/console-name.png" alt-text="Screenshot of the on-premises management console version.":::
+1. Sign into the on-premises management console and select **System Settings**.
-1. In the **Edit management console configuration** dialog box, enter the new name. The name can't be longer than 25 characters.
+1. In the **Management console networking** area, select **VLAN**.
- :::image type="content" source="media/how-to-change-the-name-of-your-azure-consoles/edit-management-console-configuration.png" alt-text="Screenshot of editing the Defender for IoT platform configuration.":::
+1. In the **Edit VLAN Configuration** dialog, select **Add VLAN** and then enter your VLAN ID and name, one at a time.
-1. Select **Save**. The new name is applied.
+1. Select **SAVE** to save your changes.
- :::image type="content" source="media/how-to-change-the-name-of-your-azure-consoles/name-changed.png" alt-text="Screenshot that shows the changed name of the console.":::
+## Configure SMTP mail server settings
-## Password recovery
+Define SMTP mail server settings on your on-premises management console so that it can send data to other servers and partner services.
-Password recovery for your on-premises management console is tied to the subscription that the device is attached to. You can't recover a password if you don't know which subscription a device is attached to.
+For example, you'll need an SMTP mail server configured in order to set up mail forwarding and [forwarding alert rules](how-to-forward-alert-information-to-partners.md).
-To reset your password:
+**Prerequisites**:
-1. Go to the on-premises management console's sign-in page.
-1. Select **Password Recovery**.
-1. Copy the unique identifier.
-1. Go to the Defender for IoT **Sites and sensors** page and select the **Recover my password** tab.
-1. Enter the unique identifier and select **Recover**. The activation file is downloaded.
-1. Go to the **Password Recovery** page and upload the activation file.
-1. Select **Next**.
+Make sure you can reach the SMTP server from the on-premises management console.
- You're now given your username and a new system-generated password.
+**To configure an SMTP server on your on-premises management console**:
-> [!NOTE]
-> The sensor is linked to the subscription that it was originally connected to. You can recover the password only by using the same subscription that it's attached to.
+1. Sign into your on-premises management console as a [privileged user](roles-on-premises.md#default-privileged-on-premises-users) via SSH/Telnet.
-## Mail server settings
+1. Run:
-Define SMTP mail server settings for the on-premises management console.
+ ```bash
+ nano /var/cyberx/properties/remote-interfaces.properties
+ ```
-To define:
+1. Enter the following SMTP server details. A sample of the resulting file contents is shown after this list:
-1. Sign in to the CLI for the on-premises management with administrative credentials.
-1. Type ```nano /var/cyberx/properties/remote-interfaces.properties```.
-1. Select enter. The following prompts appear.
- `mail.smtp_server=`
- `mail.port=25`
- `mail.sender=`
-1. Enter the SMTP server name and sender and select enter.
+ - `mail.smtp_server`
+ - `mail.port`. The default port is `25`.
+ - `mail.sender`
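When you're done, the file should contain one line per property. The following sketch shows how you might confirm the values you saved; the server name and sender address are hypothetical examples only:

```bash
# Print the mail-related properties currently saved in the file.
grep "^mail\." /var/cyberx/properties/remote-interfaces.properties

# Example output (values are placeholders):
# mail.smtp_server=smtp.contoso.local
# mail.port=25
# mail.sender=defender-iot@contoso.com
```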
## Next steps

For more information, see:

-- [Install OT system software](how-to-install-software.md)
- [Update OT system software](update-ot-software.md)
-- [Manage individual sensors](how-to-manage-individual-sensors.md)
- [Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md)
-- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
+- [Troubleshoot the on-premises management console](how-to-troubleshoot-on-premises-management-console.md)
defender-for-iot How To Set Up High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-high-availability.md
# About high availability
-Increase the resiliency of your Defender for IoT deployment by configuring high availability on your on-premises management console. High availability deployments ensure your managed sensors continuously report to an active on-premises management console.
+Increase the resiliency of your Defender for IoT deployment by configuring [high availability](ot-deploy/air-gapped-deploy.md#high-availability-for-on-premises-management-consoles) on your on-premises management console. High availability deployments ensure your managed sensors continuously report to an active on-premises management console.
This deployment is implemented with an on-premises management console pair that includes a primary and secondary appliance. > [!NOTE] > In this document, the principal on-premises management console is referred to as the primary, and the agent is referred to as the secondary.
-## About primary and secondary communication
-
-When a primary and secondary on-premises management console is paired:
--- An on-premises management console SSL certificate is applied to create a secure connection between the primary and secondary appliances. The SSL may be the self-signed certificate installed by default or a certificate installed by the customer.-
- When validation is `ON`, the appliance should be able to establish connection to the CRL server defined by the certificate.
--- The primary on-premises management console data is automatically backed up to the secondary on-premises management console every 10 minutes. The on-premises management console configurations and device data are backed up. PCAP files and logs are not included in the backup. You can back up and restore PCAPs and logs manually.--- The primary setup on the management console is duplicated on the secondary. For example, if the system settings are updated on the primary, they're also updated on the secondary.--- Before the license of the secondary expires, you should define it as the primary in order to update the license.-
-## About failover and failback
-
-If a sensor can't connect to the primary on-premises management console, it automatically connects to the secondary. Your system will be supported by both the primary and secondary simultaneously, if less than half of the sensors are communicating with the secondary. The secondary takes over when more than half of the sensors are communicating with it. Failover from the primary to the secondary takes approximately three minutes. When the failover occurs, the primary on-premises management console freezes. When this happens, you can sign in to the secondary using the same sign-in credentials.
-
-During failover, sensors continue attempts to communicate with the primary appliance. When more than half the managed sensors succeed in communicating with the primary, the primary is restored. The following message appears on the secondary console when the primary is restored:
--
-Sign back in to the primary appliance after redirection.
- ## Prerequisites Before you perform the procedures in this article, verify that you've met the following prerequisites:
Before you perform the procedures in this article, verify that you've met the fo
- Make sure that you have an [on-premises management console installed](./ot-deploy/install-software-on-premises-management-console.md) on both a primary appliance and a secondary appliance. - Both your primary and secondary on-premises management console appliances must be running identical hardware models and software versions.
- - You must be able to access to both the primary and secondary on-premises management consoles as a [privileged user](references-work-with-defender-for-iot-cli-commands.md), for running CLI commands. For more information, see [On-premises users and roles for OT monitoring](roles-on-premises.md).
+ - You must be able to access both the primary and secondary on-premises management consoles as a [privileged user](references-work-with-defender-for-iot-cli-commands.md), for running CLI commands. For more information, see [On-premises users and roles for OT monitoring](roles-on-premises.md).
-- Make sure that the primary on-premises management console is fully [configured](how-to-manage-the-on-premises-management-console.md), including at least two [OT network sensors connected](how-to-manage-individual-sensors.md#connect-a-sensor-to-the-management-console) and visible in the console UI, and scheduled backups or VLAN settings. All settings are applied to the secondary appliance automatically after pairing.
+- Make sure that the primary on-premises management console is fully [configured](how-to-manage-the-on-premises-management-console.md), including at least two [OT network sensors connected](ot-deploy/connect-sensors-to-management.md) and visible in the console UI, as well as the scheduled backups or VLAN settings. All settings are applied to the secondary appliance automatically after pairing.
- Make sure that your SSL/TLS certificates meet required criteria. For more information, see [Deploy OT appliance certificates](how-to-deploy-certificates.md).
Before you perform the procedures in this article, verify that you've met the fo
1. **On the secondary appliance**, use the following steps to copy the connection string to your clipboard:
- 1. Sign in to the secondary on-premises management console, and select **System Settings** on the left.
+ 1. Sign in to the secondary on-premises management console, and select **System Settings**.
1. In the **Sensor Setup - Connection String** area, under **Copy Connection String**, select the :::image type="icon" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/eye-icon.png" border="false"::: button to view the full connection string.
The core application logs can be exported to the Defender for IoT support team t
**To access the core logs**:
-1. Sign into the on-premises management console and select **System Settings** > **Export**. For more information on exporting logs to send to the support team, see [Export logs from the on-premises management console for troubleshooting](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#export-logs-from-the-on-premises-management-console-for-troubleshooting).
+1. Sign into the on-premises management console and select **System Settings** > **Export**. For more information on exporting logs to send to the support team, see [Export logs from the on-premises management console for troubleshooting](how-to-troubleshoot-on-premises-management-console.md#export-logs-from-the-on-premises-management-console-for-troubleshooting).
## Update the on-premises management console with high availability
Perform the update in the following order. Make sure each step is complete befor
1. Set up high availability again, on both the primary and secondary appliances. For more information, see [Create the primary and secondary pair](#create-the-primary-and-secondary-pair).
+## Failover process
+
+After setting up high availability, OT sensors automatically connect to a secondary on-premises management console if it can't connect to the primary.
+If less than half of the OT sensors are currently communicating with the secondary machine, your system is supported by both the primary and secondary machines simultaneously. If more than half of the OT sensors are communicating with the secondary machine, the secondary machine takes over all OT sensor communication. Failover from the primary to the secondary machine takes approximately three minutes.
+
+When failover occurs, the primary on-premises management console freezes and you can sign in to the secondary using the same sign-in credentials.
+
+During failover, sensors continue attempts to communicate with the primary appliance. When more than half the managed sensors succeed in communicating with the primary, the primary is restored. The following message appears on the secondary console when the primary is restored:
++
+Sign back in to the primary appliance after redirection.
+
+## Handle expired activation files
+
+Activation files can only be updated on the primary on-premises management console.
+
+Before the activation file expires on the secondary machine, define it as the primary machine so that you can update the license.
+
+For more information, see [Upload a new activation file](how-to-manage-the-on-premises-management-console.md#upload-a-new-activation-file).
## Next steps
-For more information, see [Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md).
+For more information, see [Activate and set up an on-premises management console](ot-deploy/activate-deploy-management.md).
defender-for-iot How To Set Up Snmp Mib Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-snmp-mib-monitoring.md
Title: Set up SNMP MIB monitoring
-description: Perform sensor health monitoring by using SNMP. The sensor responds to SNMP queries sent from an authorized monitoring server.
Previously updated : 05/31/2022
+ Title: Set up SNMP MIB monitoring on an OT sensor
+description: Learn how to set up your OT sensor for health monitoring via SNMP.
Last updated : 03/22/2023
-# Set up SNMP MIB monitoring
+# Set up SNMP MIB health monitoring on an OT sensor
-Monitor sensor health through the Simple Network Management Protocol (SNMP), as the sensor responds to SNMP requests sent by an authorized monitoring server, and the SNMP monitor polls sensor OIDs periodically (up to 50 times a second).
+This article describes how to configure your OT sensors for health monitoring via an authorized SNMP monitoring server. SNMP queries are sent up to 50 times a second, using UDP over port 161.
-Supported SNMP versions are SNMP version 2 and version 3. The SNMP protocol utilizes UDP as its transport protocol with port 161.
+Setup for SNMP monitoring includes configuring settings on your OT sensor and on your SNMP server. To define Defender for IoT sensors on your SNMP server, either define your settings manually or use a pre-defined SNMP MIB file downloaded from the Azure portal.
-## Prerequisites
--- To set up SNMP monitoring, you must be able to access the OT network sensor as an **Admin** user.-
- For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
--- To download the SNMP MIB file, make sure you can access the Azure portal as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user.
- If you don't already have an Azure account, you can [create your free Azure account today](https://azure.microsoft.com/free/).
-
-### Prerequisites for AES and 3-DES Encryption Support for SNMP Version 3
--- The network management station (NMS) must support Simple Network Management Protocol (SNMP) Version 3 to be able to use this feature.--- It's important to understand the SNMP architecture and the terminology of the architecture to understand the security model used and how the security model interacts with the other subsystems in the architecture.
+## Prerequisites
-- Before you begin configuring SNMP monitoring, you need to open the port UDP 161 in the firewall.
+Before you perform the procedures in this article, make sure that you have the following:
-## Set up SNMP monitoring
+- **An SNMP monitoring server**, using SNMP versions 2 or 3. If you're using SNMP version 3 and want to use AES and 3-DES encryption, you must also have:
-Set up SNMP monitoring through the OT sensor console.
+ - A network management station (NMS) that supports SNMP version 3
+ - An understanding of SNMP terminology, and the SNMP architecture in your organization
+ - The UDP port 161 open in your firewall
-You can also download the log that contains all the SNMP queries that the sensor receives, including the connection data and raw data, from the same **SNMP MIB monitoring configuration** pane.
+ Have the following details of your SNMP server ready:
-To set up SNMP monitoring:
+ - IP address
+ - Username and password
+ - Authentication type: MD5 or SHA
+ - Encryption type: DES or AES
+ - Secret key
+ - SNMP v2 community string
-1. Sign in to your OT sensor as an **Admin** user.
-1. Select **System Settings** on the left and then, under **Sensor Management**, select **SNMP MIB Monitoring**.
-1. Select **+ Add host** and enter the IP address of the server that performs the system health monitoring. You can add multiple servers.
+- **An OT sensor** [installed](ot-deploy/install-software-ot-sensor.md) and [activated](ot-deploy/activate-deploy-sensor.md), with access as an **Admin** user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
- For example:
+To download a pre-defined SNMP MIB file from the Azure portal, you'll need access to the Azure portal as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user. For more information, see [Azure user roles and permissions for Defender for IoT](roles-azure.md).
- :::image type="content" source="media/configure-active-monitoring/set-up-snmp-mib-monitoring.png" alt-text="Screenshot of the SNMP MIB monitoring configuration page." lightbox="media/configure-active-monitoring/set-up-snmp-mib-monitoring.png":::
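As an optional check against the UDP port 161 prerequisite above, you can verify from the SNMP monitoring server that the sensor's SNMP port is reachable through your firewall. For example, a sketch using `nmap`; the IP address shown is a placeholder for your sensor's address:

```bash
# Probe UDP port 161 on the OT sensor from the monitoring server.
# UDP scans require root privileges; replace 192.0.2.10 with your sensor's IP address.
sudo nmap -sU -p 161 192.0.2.10
```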
+## Configure SNMP monitoring settings on your OT sensor
-1. In the **Authentication** section, select the SNMP version:
- - If you select **V2**, type a string in **SNMP v2 Community String**.
+1. Sign into your OT sensor and select **System settings > Sensor management > Health and troubleshooting > SNMP MIB monitoring**.
- You can enter up to 32 characters, and include any combination of alphanumeric characters with no spaces.
+1. In the **SNMP MIB monitoring configuration** pane, select **+ Add host** and enter the following details:
- - If you select **V3**, specify the following parameters:
+ - **Host 1**: Enter the IP address of your SNMP monitoring server. Select **+ Add host** again if you have multiple servers, as many times as needed.
- | Parameter | Description |
- |--|--|
- | **Username** | Enter a unique username. <br><br> The SNMP username can contain up to 32 characters and include any combination of alphanumeric characters with no spaces. <br><br> The username for the SNMP v3 authentication must be configured on the system and on the SNMP server. |
- | **Password** | Enter a case-sensitive authentication password. <br><br> The authentication password can contain 8 to 12 characters and include any combination of alphanumeric characters. <br><br> The password for the SNMP v3 authentication must be configured on the system and on the SNMP server. |
- | **Auth Type** | Select **MD5** or **SHA-1**. |
- | **Encryption** | Select one of the following: <br>- **DES** (56-bit key size): RFC3414 User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3). <br>- **AES** (AES 128 bits supported): RFC3826 The Advanced Encryption Standard (AES) Cipher Algorithm in the SNMP User-based Security Model. |
- | **Secret Key** | The key must contain exactly eight characters and include any combination of alphanumeric characters. |
+ - **SNMP V2**: Select if you're using SNMP version 2, and then enter your SNMP V2 community string. A community string can have up to 32 alphanumeric characters, and no spaces.
+ - **SNMP V3**: Select if you're using SNMP version 3, and then enter the following details:
+ |Name |Description |
+ |||
+ |**Username** and **Password** | Enter the SNMP v3 credentials used to access the SNMP server. Both usernames and passwords must be configured on both the OT sensor and the SNMP server.<br><br>Usernames can include up to 32 alphanumeric characters, and no spaces. <br><br>Passwords are case-sensitive, and can include 8-12 alphanumeric characters. |
+ |**Auth Type** |Select the authentication type used to access the SNMP server: **MD5** or **SHA** |
+ |**Encryption** | Select the encryption used when communicating with the SNMP server: <br>- **DES** (56-bit key size): RFC3414 User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3). <br>- **AES** (AES 128 bits supported): RFC3826 The Advanced Encryption Standard (AES) Cipher Algorithm in the SNMP User-based Security Model. |
+ |**Secret Key** | Enter a secret key used when communicating with the SNMP server. The secret key must have exactly eight alphanumeric characters. |
-1. When you're done adding servers, select **Save**.
+1. Select **Save** to save your changes.
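After saving, you can verify the configuration end to end by querying the sensor from your SNMP server. For example, a sketch of an SNMP v3 query using the standard net-snmp `snmpget` tool; the username, passphrases, and IP address are placeholders that should match the values you configured above:

```bash
# SNMP v3 query for the sensor's software version OID (1.3.6.1.4.1.53313.2) over UDP port 161.
# Replace the username, passphrases, and IP address with your own values.
snmpget -v3 -l authPriv -u snmpuser -a MD5 -A "authPassw0rd" -x DES -X "8charKey" 192.0.2.10 1.3.6.1.4.1.53313.2
```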
-## Download the SNMP MIB file
+## Download Defender for IoT's SNMP MIB file
-To download the SNMP MIB file from Defender for IoT in the Azure portal:
+Defender for IoT in the Azure portal provides a downloadable MIB file for you to load into your SNMP monitoring system to pre-define Defender for IoT sensors.
-1. Sign in to the Azure portal.
-1. Select **Sites and sensors > More actions > Download SNMP MIB file**.
+**To download the SNMP MIB file** from [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **More actions** > **Download SNMP MIB file**.
[!INCLUDE [root-of-trust](includes/root-of-trust.md)]
-## Sensor OIDs
+## OT sensor OIDs for manual SNMP configurations
-Use the following table for reference regarding sensor object identifier values (OIDs):
+If you're configuring Defender for IoT sensors on your SNMP monitoring system manually, use the following table for reference regarding sensor object identifier values (OIDs):
| Management console and sensor | OID | Format | Description | |--|--|--|--|
-| Appliance name | 1.3.6.1.2.1.1.5.0 | STRING | Appliance name for the on-premises management console |
-| Vendor | 1.3.6.1.2.1.1.4.0 | STRING | Microsoft Support (support.microsoft.com) |
-| Platform | 1.3.6.1.2.1.1.1.0 | STRING | Sensor or on-premises management console |
-| Serial number | 1.3.6.1.4.1.53313.1 |STRING | String that the license uses |
-| Software version | 1.3.6.1.4.1.53313.2 | STRING | Xsense full-version string and management full-version string |
-| CPU usage | 1.3.6.1.4.1.53313.3.1 | GAUGE32 | Indication for zero to 100 |
-| CPU temperature | 1.3.6.1.4.1.53313.3.2 | STRING | Celsius indication for zero to 100 based on Linux input. <br><br> Any machine that has no actual physical temperature sensor (for example VMs) will return "No sensors found" |
-| Memory usage | 1.3.6.1.4.1.53313.3.3 | GAUGE32 | Indication for zero to 100 |
-| Disk Usage | 1.3.6.1.4.1.53313.3.4 | GAUGE32 | Indication for zero to 100 |
-| Service Status | 1.3.6.1.4.1.53313.5 |STRING | Online or offline if one of the four crucial components is down |
-| Locally/cloud connected | 1.3.6.1.4.1.53313.6 |STRING | Activation mode of this appliance: Cloud Connected / Locally Connected |
-| License status | 1.3.6.1.4.1.53313.7 |STRING | Activation period of this appliance: Active / Expiration Date / Expired |
+| **Appliance name** | 1.3.6.1.2.1.1.5.0 | STRING | Appliance name for the on-premises management console |
+| **Vendor** | 1.3.6.1.2.1.1.4.0 | STRING | Microsoft Support (support.microsoft.com) |
+| **Platform** | 1.3.6.1.2.1.1.1.0 | STRING | Sensor or on-premises management console |
+| **Serial number** | 1.3.6.1.4.1.53313.1 |STRING | String that the license uses |
+| **Software version** | 1.3.6.1.4.1.53313.2 | STRING | Xsense full-version string and management full-version string |
+| **CPU usage** | 1.3.6.1.4.1.53313.3.1 | GAUGE32 | Indication for zero to 100 |
+| **CPU temperature** | 1.3.6.1.4.1.53313.3.2 | STRING | Celsius indication for zero to 100 based on Linux input. <br><br> Any machine that has no actual physical temperature sensor (for example VMs) returns "No sensors found" |
+| **Memory usage** | 1.3.6.1.4.1.53313.3.3 | GAUGE32 | Indication for zero to 100 |
+| **Disk Usage** | 1.3.6.1.4.1.53313.3.4 | GAUGE32 | Indication for zero to 100 |
+| **Service Status** | 1.3.6.1.4.1.53313.5 |STRING | Online or offline if one of the four crucial components is down |
+| **Locally/cloud connected** | 1.3.6.1.4.1.53313.6 |STRING | Activation mode of this appliance: Cloud Connected / Locally Connected |
+| **License status** | 1.3.6.1.4.1.53313.7 |STRING | Activation period of this appliance: Active / Expiration Date / Expired |
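For example, if you're using SNMP version 2, a manual query for one of these OIDs with the net-snmp tools might look like the following sketch. The community string and IP address are placeholders:

```bash
# SNMP v2c query for the sensor's CPU usage OID (1.3.6.1.4.1.53313.3.1) over UDP port 161.
snmpget -v2c -c MyCommunityString 192.0.2.10 1.3.6.1.4.1.53313.3.1
```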
Note that: -- Non-existing keys respond with null, HTTP 200.
+- Nonexistent keys respond with null, HTTP 200.
- Hardware-related MIBs (CPU usage, CPU temperature, memory usage, disk usage) should be tested on all architectures and physical sensors. CPU temperature on virtual machines is expected to be not applicable.

## Next steps
-For more information, see [Export troubleshooting logs](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
+For more information, see [Maintain OT network sensors from the GUI](how-to-manage-individual-sensors.md).
defender-for-iot How To Troubleshoot On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-on-premises-management-console.md
+
+ Title: Troubleshoot the on-premises management console
+description: Troubleshoot your on-premises management console to eliminate any problems you might be having.
Last updated : 06/15/2022++
+# Troubleshoot the on-premises management console
+
+This article describes basic troubleshooting tools for the on-premises management console. In addition to the items described here, you can forward alerts about failed sensor backups and disconnected sensors.
+
+For any other issues, contact [Microsoft Support](https://support.microsoft.com/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099).
+
+## Prerequisites
+
+To perform the procedures in this article, make sure that you have:
+
+- Access to the on-premises management console as a **Support** user. For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+
+## Check system health
+
+Check your system health from the on-premises management console.
+
+**To access the system health tool**:
+
+1. Sign in to the on-premises management console with the *support* user credentials.
+
+1. Select **System Settings** > **System Statistics**.
+
+ :::image type="icon" source="media/tutorial-install-components/system-statistics-icon.png" border="false":::
+
+1. System health data appears. Select an item to view more details in the box. For example:
+
+ :::image type="content" source="media/tutorial-install-components/system-health-check-screen.png" alt-text="Screenshot that shows the system health check.":::
+
+System health checks include the following:
+
+|Name |Description |
+|||
+|**Sanity** | |
+|- Appliance | Runs the appliance sanity check. You can perform the same check by using the CLI command `system-sanity`. |
+|- Version | Displays the appliance version. |
+|- Network Properties | Displays the sensor network parameters. |
+|**Redis** | |
+|- Memory | Provides the overall picture of memory usage, such as how much memory was used and how much remained. |
+|- Longest Key | Displays the longest keys that might cause extensive memory usage. |
+|**System** | |
+|- Core Log | Provides the last 500 rows of the core log, so that you can view the recent log rows without exporting the entire system log. |
+|- Task Manager | Translates the tasks that appear in the table of processes to the following layers: <br><br> - Persistent layer (Redis)<br> - Cache layer (SQL) |
+|- Network Statistics | Displays your network statistics. |
+|- TOP | Shows the table of processes. It's a Linux command that provides a dynamic real-time view of the running system. |
+|- Backup Memory Check | Provides the status of the backup memory, checking the following:<br><br> - The location of the backup folder<br> - The size of the backup folder<br> - The limitations of the backup folder<br> - When the last backup happened<br> - How much space there is for the extra backup files |
+|- ifconfig | Displays the parameters for the appliance's physical interfaces. |
+|- CyberX nload | Displays network traffic and bandwidth by using the six-second tests. |
+|- Errors from core log | Displays errors from the core log file. |
+
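If you prefer the CLI, the appliance sanity check listed in the table can also be run from an SSH session to the on-premises management console. A sketch, assuming you're signed in as the *support* user:

```bash
# Runs the same appliance sanity check that's triggered from the System Statistics page.
system-sanity
```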
+## Investigate a lack of expected alerts
+
+If you don't see an expected alert on the on-premises **Alerts** page, do the following to troubleshoot:
+
+- Verify whether the alert is already listed as a reaction to a different security instance. If it is, and that alert hasn't yet been handled, a new alert isn't shown elsewhere.
+
+- Verify that the alert isn't being excluded by **Alert Exclusion** rules. For more information, see [Create alert exclusion rules on an on-premises management console](how-to-accelerate-alert-incident-response.md#create-alert-exclusion-rules-on-an-on-premises-management-console).
+
+## Tweak the Quality of Service (QoS)
+
+To save your network resources, you can limit the number of alerts sent to external systems (such as emails or SIEM) in one sync operation between an appliance and the on-premises management console.
+
+The default number of alerts is 50. This means that in one communication session between an appliance and the on-premises management console, no more than 50 alerts are sent to external systems.
+
+To limit the number of alerts, use the `notifications.max_number_to_report` property available in `/var/cyberx/properties/management.properties`. No restart is needed after you change this property.
+
+**To tweak the Quality of Service (QoS)**:
+
+1. Sign into your on-premises management console via SSH to access the [CLI](references-work-with-defender-for-iot-cli-commands.md).
+
+1. Verify the default values:
+
+ ```bash
+ grep "notifications" /var/cyberx/properties/management.properties
+ ```
+
+ The following default values appear:
+
+ ```bash
+ notifications.max_number_to_report=50
+ notifications.max_time_to_report=10 (seconds)
+ ```
+
+1. Edit the default settings:
+
+ ```bash
+ sudo nano /var/cyberx/properties/management.properties
+ ```
+
+1. Edit the settings of the following lines:
+
+ ```bash
+ notifications.max_number_to_report=50
+ notifications.max_time_to_report=10 (seconds)
+ ```
+
+1. Save the changes. No restart is required.
+
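For example, to cap each sync operation at 30 alerts, the edited lines might look like the following sketch. The value `30` is illustrative only; choose a limit that suits your environment:

```config
notifications.max_number_to_report=30
notifications.max_time_to_report=10
```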
+## Export logs from the on-premises management console for troubleshooting
+
+For further troubleshooting, you may want to export logs to send to the support team, such as audit or database logs.
+
+**To export log data**:
+
+1. In the on-premises management console, select **System Settings > Export**.
+
+1. In the **Export Troubleshooting Information** dialog:
+
+ 1. In the **File Name** field, enter a meaningful name for the exported log. The default filename uses the current date, such as **13:10-June-14-2022.tar.gz**.
+
+ 1. Select the logs you would like to export.
+
+ 1. Select **Export**.
+
+ The file is exported and is linked from the **Archived Files** list at the bottom of the **Export Troubleshooting Information** dialog.
+
+ For example:
+
+ :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/export-logs-on-premises-management-console.png" alt-text="Screenshot of the Export Troubleshooting Information dialog in the on-premises management console." lightbox="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/export-logs-on-premises-management-console.png":::
+
+1. Select the file link to download the exported log, and also select the :::image type="icon" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/eye-icon.png" border="false"::: button to view its one-time password.
+
+1. To open the exported logs, forward the downloaded file and the one-time password to the support team. Exported logs can be opened only together with the Microsoft support team.
+
+ To keep your logs secure, make sure to forward the password separately from the downloaded log.
+
+## Next steps
+
+- [View alerts](how-to-view-alerts.md)
+
+- [Track on-premises user activity](track-user-activity.md)
defender-for-iot How To Troubleshoot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-sensor.md
+
+ Title: Troubleshoot the sensor
+description: Troubleshoot your sensor to eliminate any problems you might be having.
Last updated : 03/14/2023++
+# Troubleshoot the sensor
+
+This article describes basic troubleshooting tools for the sensor. In addition to the items described here, you can check the health of your system in the following ways:
+
+- **Alerts**: An alert is created when the sensor interface that monitors the traffic is down.
+- **SNMP**: Sensor health is monitored through SNMP. Microsoft Defender for IoT responds to SNMP queries sent from an authorized monitoring server.
+- **System notifications**: When a management console controls the sensor, you can forward alerts about failed sensor backups and disconnected sensors.
+
+For any other issues, contact [Microsoft Support](https://support.microsoft.com/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099).
+
+## Prerequisites
+
+To perform the procedures in this article, make sure that you have:
+
+- Access to the OT network sensor as a **Support** user. For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
+
+## Check system health
+
+Check your system health from the sensor.
+
+**To access the system health tool**:
+
+1. Sign in to the sensor with the *support* user credentials and select **System Settings** > :::image type="icon" source="media/tutorial-install-components/system-health-check-icon.png" border="false"::: **System health check**.
+
+1. In the **System health check** pane, select a command from the menu to view more details in the box. For example:
+
+ :::image type="content" source="media/tutorial-install-components/system-health-check-sensor.png" alt-text="Screenshot that shows the system health check screen on the sensor console.":::
+
+System health checks include the following:
+
+|Name |Description |
+|||
+|**Sanity** | |
+|- Appliance | Runs the appliance sanity check. You can perform the same check by using the CLI command `system-sanity`. |
+|- Version | Displays the appliance version. |
+|- Network Properties | Displays the sensor network parameters. |
+|**Redis** | |
+|- Memory | Provides the overall picture of memory usage, such as how much memory was used and how much remained. |
+|- Longest Key | Displays the longest keys that might cause extensive memory usage. |
+|**System** | |
+|- Core Log | Provides the last 500 rows of the core log, so that you can view the recent log rows without exporting the entire system log. |
+|- Task Manager | Translates the tasks that appear in the table of processes to the following layers: <br><br> - Persistent layer (Redis)<br> - Cache layer (SQL) |
+|- Network Statistics | Displays your network statistics. |
+|- TOP | Shows the table of processes. It's a Linux command that provides a dynamic real-time view of the running system. |
+|- Backup Memory Check | Provides the status of the backup memory, checking the following:<br><br> - The location of the backup folder<br> - The size of the backup folder<br> - The limitations of the backup folder<br> - When the last backup happened<br> - How much space there is for the extra backup files |
+|- ifconfig | Displays the parameters for the appliance's physical interfaces. |
+|- CyberX nload | Displays network traffic and bandwidth by using the six-second tests. |
+|- Errors from core log | Displays errors from the core log file. |
+
+### Check system health by using the CLI
+
+Verify that the system is up and running prior to testing the system's sanity.
+
+For more information, see [CLI command reference from OT network sensors](cli-ot-sensor.md).
+
+**To test the system's sanity**:
+
+1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user *support*.
+
+1. Enter `system sanity`.
+
+1. Check that all the services are green (running).
+
+ :::image type="content" source="media/tutorial-install-components/support-screen.png" alt-text="Screenshot that shows running services.":::
+
+1. Verify that **System is UP! (prod)** appears at the bottom.
+
+Verify that the correct version is used:
+
+**To check the system's version**:
+
+1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user *support*.
+
+1. Enter `system version`.
+
+1. Check that the correct version appears.
+
+Verify that all the input interfaces configured during the installation process are running:
+
+**To validate the system's network status**:
+
+1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the *support* user.
+
+1. Enter `network list` (the equivalent of the Linux command `ifconfig`).
+
+1. Validate that the required input interfaces appear. For example, if two quad Copper NICs are installed, there should be 10 interfaces in the list.
+
+ :::image type="content" source="media/tutorial-install-components/interface-list-screen.png" alt-text="Screenshot that shows the list of interfaces.":::
+
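As a quick reference, the three checks above can be run back to back from the sensor CLI while signed in as the *support* user. A sketch, with explanatory comments:

```bash
# Appliance sanity check; all services should be running, ending with "System is UP! (prod)"
system sanity
# Confirm that the expected software version is installed
system version
# Equivalent of the Linux ifconfig command; confirm all configured input interfaces appear
network list
```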
+Verify that you can access the console web GUI:
+
+**To check that management has access to the UI**:
+
+1. Connect a laptop with an Ethernet cable to the management port (**Gb1**).
+
+1. Define the laptop NIC address to be in the same range as the appliance.
+
+ :::image type="content" source="media/tutorial-install-components/access-to-ui.png" alt-text="Screenshot that shows management access to the UI." border="false":::
+
+1. Ping the appliance's IP address from the laptop to verify connectivity (default: 10.100.10.1).
+
+1. Open the Chrome browser in the laptop and enter the appliance's IP address.
+
+1. In the **Your connection is not private** window, select **Advanced** and proceed.
+
+1. The test is successful when the Defender for IoT sign-in screen appears.
+
+ :::image type="content" source="media/tutorial-install-components/defender-for-iot-sign-in-screen.png" alt-text="Screenshot that shows access to management console.":::
+
+## Download a diagnostics log for support
+
+This procedure describes how to download a diagnostics log to send to support in connection with a specific support ticket.
+
+This feature is supported for the following sensor versions:
+
+- **22.1.1** - Download a diagnostic log from the sensor console.
+- **22.1.3** - For locally managed sensors, [upload a diagnostics log](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support) from the **Sites and sensors** page in the Azure portal. This file is automatically sent to support when you open a ticket on a cloud-connected sensor.
++
+**To download a diagnostics log**:
+
+1. On the sensor console, select **System settings > Sensor management > Health and troubleshooting > Backup & restore > Backup**.
+
+1. Under **Logs**, select **Support Ticket Diagnostics**, and then select **Export**.
+
+ :::image type="content" source="media/release-notes/support-ticket-diagnostics.png" alt-text="Screenshot of the Backup & Restore pane showing the Support Ticket Diagnostics option." lightbox="media/release-notes/support-ticket-diagnostics.png":::
+
+1. For a locally managed sensor, version 22.1.3 or higher, continue with [Upload a diagnostics log for support](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support).
+
+## Retrieve forensics data
+
+The following types of forensic data are stored locally on OT sensors, for devices detected by that sensor:
+
+- Device data
+- Alert data
+- Alert PCAP files
+- Event timeline data
+- Log files
+
+Use the OT sensor's [data mining reports](how-to-create-data-mining-queries.md) or [Azure Monitor workbooks](workbooks.md) to retrieve forensic data from that sensor's storage. Each type of data has a different retention period and maximum capacity.
+
+For more information, see [Data retention across Microsoft Defender for IoT](references-data-retention.md).
+
+## You can't connect by using a web interface
+
+1. Verify that the computer that you're trying to connect is on the same network as the appliance.
+
+1. Verify that the GUI network is connected to the management port.
+
+1. Ping the appliance's IP address. If there's no ping:
+
+ 1. Connect a monitor and a keyboard to the appliance.
+
+ 1. Use the *support* user and password to sign in.
+
+ 1. Use the command `network list` to see the current IP address.
+
+1. If the network parameters are misconfigured, use the following procedure to change them:
+
+ 1. Use the command `network edit-settings`.
+
+ 1. To change the management network IP address, select **Y**.
+
+ 1. To change the subnet mask, select **Y**.
+
+ 1. To change the DNS, select **Y**.
+
+ 1. To change the default gateway IP address, select **Y**.
+
+ 1. For the input interface change (sensor only), select **N**.
+
+ 1. To apply the settings, select **Y**.
+
+1. After restart, connect with the *support* user credentials and use the `network list` command to verify that the parameters were changed.
+
+1. Try to ping and connect from the GUI again.
+
+## The appliance isn't responding
+
+1. Connect a monitor and keyboard to the appliance, or use PuTTY to connect remotely to the CLI.
+
+1. Use the *support* user credentials to sign in.
+
+1. Use the `system sanity` command and check that all processes are running. For example:
+
+ :::image type="content" source="media/tutorial-install-components/system-sanity-screen.png" alt-text="Screenshot that shows the system sanity command.":::
+
+For any other issues, contact [Microsoft Support](https://support.microsoft.com/en-us/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099).
+
+## Investigate password failure at initial sign-in
+
+When signing into a pre-configured sensor for the first time, you'll need to perform password recovery as follows:
+
+1. On the Defender for IoT sign in screen, select **Password recovery**. The **Password recovery** screen opens.
+
+1. Select either **CyberX** or **Support**, and copy the unique identifier.
+
+1. Navigate to the Azure portal and select **Sites and sensors**.
+
+1. Select the **More Actions** drop down menu and select **Recover on-premises management console password**.
+
+ :::image type="content" source="media/how-to-create-and-manage-users/recover-password.png" alt-text=" Screenshot of the recover on-premises management console password option.":::
+
+1. Enter the unique identifier that you received on the **Password recovery** screen and select **Recover**. The `password_recovery.zip` file is downloaded. Don't extract or modify the zip file.
+
+ :::image type="content" source="media/how-to-create-and-manage-users/enter-identifier.png" alt-text="Screenshot of the Recover dialog box.":::
+
+1. On the **Password recovery** screen, select **Upload**. The **Upload Password Recovery File** window opens.
+
+1. Select **Browse** to locate your `password_recovery.zip` file, or drag the `password_recovery.zip` to the window.
+
+1. Select **Next**. Your username and a system-generated password for your management console then appear.
+
+ > [!NOTE]
+ > When you sign in to a sensor or on-premises management console for the first time, it's linked to your Azure subscription, which you'll need if you need to recover the password for the *cyberx*, or *support* user. For more information, see the relevant procedure for [sensors](manage-users-sensor.md#recover-privileged-access-to-a-sensor) or an [on-premises management console](manage-users-on-premises-management-console.md#recover-privileged-access-to-an-on-premises-management-console).
+
+## Investigate a lack of traffic
+
+An indicator appears at the top of the console when the sensor recognizes that there's no traffic on one of the configured ports. This indicator is visible to all users. When this message appears, you can investigate where there's no traffic. Make sure the span cable is connected and there was no change in the span architecture.
+
+## Check system performance
+
+When a new sensor is deployed or a sensor is working slowly or not showing any alerts, you can check system performance.
+
+1. Sign in to the sensor and select **Overview**. Make sure that **PPS** is greater than 0, and that **Devices** are being discovered.
+1. In the **Data Mining** page, generate a report.
+1. In the **Trends & Statistics** page, create a dashboard.
+1. In the **Alerts** page, check that the alert was created.
+
+## Investigate a lack of expected alerts
+
+If the **Alerts** window doesn't show an alert that you expected, verify the following:
+
+1. Check if the same alert already appears in the **Alerts** window as a reaction to a different security instance. If yes, and this alert hasn't been handled yet, the sensor console doesn't show a new alert.
+1. Make sure you didn't exclude this alert by using the **Alert Exclusion** rules in the management console.
+
+## Investigate dashboard that shows no data
+
+When the dashboards in the **Trends & Statistics** window show no data, do the following:
+
+1. [Check system performance](#check-system-performance).
+1. Make sure the time and region settings are properly configured and not set to a future time.
+
+## Investigate a device map that shows only broadcasting devices
+
+If the devices shown on the device map appear disconnected from each other, something might be wrong with the SPAN port configuration. That is, you might be seeing only broadcasting devices and no unicast traffic.
+
+1. Validate that you're only seeing the broadcast traffic. To do this, in **Data Mining**, select **Create report**. In **Create new report**, specify the report fields. In **Choose Category**, choose **Select all**.
+1. Save the report, and review it to see if only broadcast and multicast traffic (and no unicast traffic) appears. If so, contact your networking team to fix the SPAN port configuration so that you can see the unicast traffic as well. Alternately, you can record a PCAP directly from the switch, or connect a laptop by using Wireshark.
+
+For more information, see:
+- [Configure traffic mirroring](traffic-mirroring/traffic-mirroring-overview.md)
+- [Upload and play PCAP files](how-to-manage-individual-sensors.md#upload-and-play-pcap-files)
+
+## Connect the sensor to NTP
+
+You can configure a standalone sensor, or a management console together with the sensors it manages, to connect to an NTP server.
+
+To connect a standalone sensor to NTP:
+
+- [See the CLI documentation](./references-work-with-defender-for-iot-cli-commands.md).
+
+To connect a sensor controlled by the management console to NTP:
+
+- The connection to NTP is configured on the management console. All the sensors that the management console controls get the NTP connection automatically.
+
+## Investigate when devices aren't shown on the map, or you have multiple internet-related alerts
+
+Sometimes ICS devices are configured with external IP addresses. These ICS devices aren't shown on the map. Instead of the devices, an internet cloud appears on the map. The IP addresses of these devices are included in the cloud image. Another indication of the same problem is when multiple internet-related alerts appear. Fix the issue as follows:
+
+1. Right-click the cloud icon on the device map and select **Export IP Addresses**.
+1. Copy the public ranges that are private, and add them to the subnet list. For more information, see [Define ICS or IoT and segregated subnets](how-to-control-what-traffic-is-monitored.md#define-ot-and-iot-subnets).
+1. Generate a new data-mining report for internet connections.
+1. In the data-mining report, enter the administrator mode and delete the IP addresses of your ICS devices.
+
+## Clearing sensor data
+
+In cases where the sensor needs to be relocated or erased, all learned data can be cleared from the sensor.
+
+For more information on how to clear system data, see [Clear OT sensor data](how-to-manage-individual-sensors.md#clear-ot-sensor-data).
+
+## Export logs from the sensor console for troubleshooting
+
+For further troubleshooting, you may want to export logs to send to the support team, such as database or operating system logs.
+
+**To export log data**:
+
+1. In the sensor console, go to **System settings** > **Sensor management** > **Backup & restore** > **Backup**.
+
+1. In the **Export Troubleshooting Information** dialog:
+
+ 1. In the **File Name** field, enter a meaningful name for the exported log. The default filename uses the current date, such as **13:10-June-14-2022.tar.gz**.
+
+ 1. Select the logs you would like to export.
+
+ 1. Select **Export**.
+
+ The file is exported and is linked from the **Archived Files** list at the bottom of the **Export Troubleshooting Information** dialog.
+
+ For example:
+
+ :::image type="content" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/export-logs-sensor.png" alt-text="Screenshot of the export troubleshooting information dialog in the sensor console. " lightbox="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/export-logs-sensor.png":::
+
+1. Select the file link to download the exported log, and also select the :::image type="icon" source="media/how-to-troubleshoot-the-sensor-and-on-premises-management-console/eye-icon.png" border="false"::: button to view its one-time password.
+
+1. To open the exported logs, forward the downloaded file and the one-time password to the support team. Exported logs can be opened only together with the Microsoft support team.
+
+ To keep your logs secure, make sure to forward the password separately from the downloaded log.
+
+> [!NOTE]
+> Support ticket diagnostics can be downloaded from the sensor console and then uploaded directly to the support team in the Azure portal. For more information on downloading diagnostic logs, see [Download a diagnostics log for support](how-to-troubleshoot-sensor.md#download-a-diagnostics-log-for-support).
+
+## Next steps
+
+- [View alerts](how-to-view-alerts.md)
+
+- [Set up SNMP MIB health monitoring on an OT sensor](how-to-set-up-snmp-mib-monitoring.md)
+
+- [Monitor disconnected OT sensors](how-to-manage-sensors-from-the-on-premises-management-console.md#monitor-disconnected-ot-sensors)
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
Title: Troubleshoot the sensor and on-premises management console
-description: Troubleshoot your sensor and on-premises management console to eliminate any problems you might be having.
+ Title: Troubleshoot the OT sensor and on-premises management console
+description: Troubleshoot your OT sensor and on-premises management console to eliminate any problems you might be having.
Last updated 06/15/2022
To connect a sensor controlled by the management console to NTP:
Sometimes ICS devices are configured with external IP addresses. These ICS devices are not shown on the map. Instead of the devices, an internet cloud appears on the map. The IP addresses of these devices are included in the cloud image. Another indication of the same problem is when multiple internet-related alerts appear. Fix the issue as follows: 1. Right-click the cloud icon on the device map and select **Export IP Addresses**.
-1. Copy the public ranges that are private, and add them to the subnet list. Learn more about [configuring subnets](how-to-control-what-traffic-is-monitored.md#configure-subnets).
+1. Copy the public ranges that are private, and add them to the subnet list.
1. Generate a new data-mining report for internet connections. 1. In the data-mining report, enter the administrator mode and delete the IP addresses of your ICS devices.
Sometimes ICS devices are configured with external IP addresses. These ICS devic
In cases where the sensor needs to be relocated or erased, all learned data can be cleared from the sensor.
-For more information on how to clear system data, see [Clearing sensor data](how-to-manage-individual-sensors.md#clearing-sensor-data).
- ### Export logs from the sensor console for troubleshooting For further troubleshooting, you may want to export logs to send to the support team, such as database or operating system logs.
For further troubleshooting, you may want to export logs to send to the support
To keep your logs secure, make sure to forward the password separately from the downloaded log. > [!NOTE]
-> Support ticket diagnostics can be downloaded from the sensor console and then uploaded directly to the support team in the Azure portal. For more information on downloading diagnostic logs, see [Download a diagnostics log for support](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support).
+> Support ticket diagnostics can be downloaded from the sensor console and then uploaded directly to the support team in the Azure portal.
## Troubleshoot an on-premises management console
For further troubleshooting, you may want to export logs to send to the support
- [Set up SNMP MIB monitoring](how-to-set-up-snmp-mib-monitoring.md) -- [Understand sensor disconnection events](how-to-manage-sensors-from-the-on-premises-management-console.md#understand-sensor-disconnection-events)- - [Track on-premises user activity](track-user-activity.md)
defender-for-iot How To Work With Alerts On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-alerts-on-premises-management-console.md
This article describes how to view Defender for IoT alerts on an on-premises man
## Prerequisites -- **To have alerts on the on-premises management console**, you must have an OT network sensor with alerts connected to your on-premises management console. For more information, see [View and manage alerts on your OT sensor](how-to-view-alerts.md) and [Connect sensors to the on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md#connect-sensors-to-the-on-premises-management-console).
+Before performing the procedures in this article, make sure that you have:
-- **To view alerts the on-premises management console**, sign in as an *Admin*, *Security Analyst*, or *Viewer* user.
+- An on-premises management console [installed](ot-deploy/install-software-on-premises-management-console.md), [activated, and configured](ot-deploy/activate-deploy-management.md). To view alerts by location or zone, make sure that you've [configured sites and zones](ot-deploy/sites-and-zones-on-premises.md) on the on-premises management console.
-- **To manage alerts on the on-premises management console**, sign in as an *Admin* or *Security Analyst* user. Management activities include acknowledging or muting an alert, depending on the alert type.
+- One or more OT sensors [installed](ot-deploy/install-software-ot-sensor.md), [activated, configured](ot-deploy/activate-deploy-sensor.md), and [connected to your on-premises management console](ot-deploy/connect-sensors-to-management.md). To view alerts per zone, make sure that each sensor is assigned to a specific zone.
-For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
+- Access to the on-premises management console with one of the following [user roles](roles-on-premises.md):
+
+ - **To view alerts on the on-premises management console**, sign in as an *Admin*, *Security Analyst*, or *Viewer* user.
+
+ - **To manage alerts on the on-premises management console**, sign in as an *Admin* or *Security Analyst* user. Management activities include acknowledging or muting an alert, depending on the alert type.
## View alerts on the on-premises management console
For more information, see [On-premises users and roles for OT monitoring with De
- Select **OPEN SENSOR** to open the sensor that generated the alert and continue your investigation. For more information, see [View and manage alerts on your OT sensor](how-to-view-alerts.md).
- - Select **SHOW DEVICES** to show the affected devices on a zone map. For more information, see [View information per zone](how-to-view-information-per-zone.md).
+ - Select **SHOW DEVICES** to show the affected devices on a zone map. For more information, see [Create OT sites and zones on an on-premises management console](ot-deploy/sites-and-zones-on-premises.md).
> [!NOTE] > On the on-premises management console, *New* alerts are called *Unacknowledged*, and *Closed* alerts are called *Acknowledged*. For more information, see [Alert statuses and triaging options](alerts.md#alert-statuses-and-triaging-options).
At the top of the **Alerts** page, use the **Free Search**, **Sites**, **Zones**
- Select **Clear** to remove all filters.
+### View alerts by location
+
+To view alerts from connected OT sensors across your entire global network, use the **Enterprise View** map on an on-premises management console.
+
+1. Sign into your on-premises management console and select **Enterprise View**. The default map view shows your sites at their locations around the world.
+
+ (Optional) Use the **All Sites** and **All Regions** menus at the top of the page to filter your map and display only specific sites, or only specific regions.
+
+1. From the **Default View** menu at the top of the page, select any of the following to drill down to specific types of alerts:
+
+ - **Risk Management**. Highlights site risk alerts, helping you prioritize mitigation activities and plan security improvements.
+ - **Incident Response**. Highlights any active (unacknowledged) alerts on each site.
+ - **Malicious Activity**. Highlights malware alerts, which require immediate action.
+ - **Operational Alerts**. Highlights operational alerts, such as PLC stops and firmware or program uploads.
+
+ In any view but the **Default View**, your sites appear in red, yellow, or green. Red sites have alerts that require immediate action, yellow sites have alerts that justify investigation, and green sites require no action.
+
+1. Select any site that's red or yellow, and then select the :::image type="icon" source="media/how-to-work-with-alerts-on-premises-management-console/alerts-icon.png" border="false"::: alerts button for a specific OT sensor to jump to that sensor's current alerts. For example:
+
+ :::image type="content" source="media/how-to-work-with-alerts-on-premises-management-console/select-alerts-button.png" alt-text="Screenshot showing the Alerts button.":::
+
+ The **Alerts** page opens, automatically filtered to the selected alerts.
+
+### View alerts by zone
+
+To view alerts from connected OT sensors for a specific zone, use the **Site Management** page on an on-premises management console.
+
+1. Sign into your on-premises management console and select **Site Management**.
+
+1. Locate the site and zone you want to view, using the filtering options at the top as needed:
+
+ - **Connectivity**: Select to view all OT sensors, or only connected or disconnected sensors.
+ - **Upgrade Status**: Select to view all OT sensors, or only those with a specific [software update status](update-ot-software.md#update-an-on-premises-management-console).
+ - **Business Unit**: Select to view all OT sensors, or only those from a [specific business unit](best-practices/plan-corporate-monitoring.md#plan-ot-sites-and-zones).
+ - **Region**: Select to view all OT sensors, or only those from a [specific region](best-practices/plan-corporate-monitoring.md#plan-ot-sites-and-zones).
+
+1. Select the :::image type="icon" source="media/how-to-work-with-alerts-on-premises-management-console/alerts-icon.png" border="false"::: alerts button for a specific OT sensor to jump to that sensor's current alerts.
+ ## Manage alert status and triage alerts Use the following options to manage alert status on your on-premises management console, depending on the alert type:
The CSV file is generated, and you're prompted to save it locally.
> [!div class="nextstepaction"] > [Microsoft Defender for IoT alerts](alerts.md)-
-> [!div class="nextstepaction"]
-> [Data retention across Microsoft Defender for IoT](references-data-retention.md)
defender-for-iot How To Work With The Sensor Device Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-the-sensor-device-map.md
To view devices across multiple sensors in a zone, you'll also need an on-premis
## View devices on OT sensor device map
-1. Sign into your OT sensor and select **Device map**. All devices detected by the OT sensor are displayed by default according to [Purdue layer](best-practices/understand-network-architecture.md#purdue-reference-model-and-defender-for-iot).
+1. Sign into your OT sensor and select **Device map**. All devices detected by the OT sensor are displayed by default according to [Purdue layer](best-practices/understand-network-architecture.md).
On the OT sensor's device map:
To see device details, select a device and expand the device details pane on the
### View IT subnets from an OT sensor device map
-By default, IT devices are automatically aggregated by [subnet](how-to-control-what-traffic-is-monitored.md#configure-subnets), so that the map focuses on OT and ICS networks.
+By default, IT devices are automatically aggregated by [subnet](how-to-control-what-traffic-is-monitored.md#define-ot-and-iot-subnets), so that the map focuses on your local OT and IoT networks.
**To expand an IT subnet**:
The following table lists available responses for each notification, and when we
| Type | Description | Available responses | Auto-resolve| |--|--|--|--| | **New IP detected** | A new IP address is associated with the device. This may occur in the following scenarios: <br><br>- A new or additional IP address was associated with a device already detected, with an existing MAC address.<br><br> - A new IP address was detected for a device that's using a NetBIOS name. <br /><br /> - An IP address was detected as the management interface for a device associated with a MAC address. <br /><br /> - A new IP address was detected for a device that's using a virtual IP address. | - **Set Additional IP to Device**: Merge the devices <br />- **Replace Existing IP**: Replaces any existing IP address with the new address <br /> - **Dismiss**: Remove the notification. |**Dismiss** |
-| **No subnets configured** | No subnets are currently configured in your network. <br /><br /> We recommend configuring subnets for the ability to differentiate between OT and IT devices on the map. | - **Open Subnets Configuration** and [configure subnets](how-to-control-what-traffic-is-monitored.md#configure-subnets). <br />- **Dismiss**: Remove the notification. |**Dismiss** |
+| **No subnets configured** | No subnets are currently configured in your network. <br /><br /> We recommend configuring subnets for the ability to differentiate between OT and IT devices on the map. | - **Open Subnets Configuration** and [configure subnets](how-to-control-what-traffic-is-monitored.md#define-ot-and-iot-subnets). <br />- **Dismiss**: Remove the notification. |**Dismiss** |
| **Operating system changes** | One or more new operating systems have been associated with the device. | - Select the name of the new OS that you want to associate with the device.<br /> - **Dismiss**: Remove the notification. |No automatic handling| | **New subnets** | New subnets were discovered. |- **Learn**: Automatically add the subnet.<br />- **Open Subnet Configuration**: Add all missing subnet information.<br />- **Dismiss**<br />Remove the notification. |**Dismiss** | | **Device type changes** | A new device type has been associated with the device. | - **Set as {…}**: Associate the new type with the device.<br />- **Dismiss**: Remove the notification. |No automatic handling|
defender-for-iot How To Work With Threat Intelligence Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-threat-intelligence-packages.md
# Maintain threat intelligence packages on OT network sensors
-Microsoft security teams continually run proprietary ICS threat intelligence and vulnerability research. Security research provides security detection, analytics, and response to Microsoft's cloud infrastructure and services, traditional products and deices, and internal corporate resources.
+Microsoft security teams continually run proprietary ICS threat intelligence and vulnerability research. Security research provides security detection, analytics, and response to Microsoft's cloud infrastructure and services, traditional products and devices, and internal corporate resources.
Microsoft Defender for IoT regularly delivers threat intelligence package updates for OT network sensors, providing increased protection from known and relevant threats and insights that can help your teams triage and prioritize alerts.
To perform the procedures in this article, make sure that you have:
- **To download threat intelligence packages from the Azure portal**, you need access to the Azure portal as a [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader), [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role.
- - **To push threat intelligence updates to cloud-connected OT sensors from the Azure portal**, you need access to Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role.
+ - **To push threat intelligence updates to cloud-connected OT sensors from the Azure portal**, you need access to the Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role.
- **To manually upload threat intelligence packages to OT sensors or on-premises management consoles**, you need access to the OT sensor or on-premises management console as an **Admin** user.
For more information, see [Azure user roles and permissions for Defender for IoT
## View the most recent threat intelligence package
-To view the most recent package delivered, in the Azure portal, select **Sites and sensors** > **Threat intelligence update (Preview)** > **Local update**.
+**To view the most recent package available from Defender for IoT**:
-Details about the most recent package available are shown in the **Sensor TI update** pane. For example:
+In the Azure portal, select **Sites and sensors** > **Threat intelligence update (Preview)** > **Local update**. Details about the most recent package available are shown in the **Sensor TI update** pane. For example:
:::image type="content" source="media/how-to-work-with-threat-intelligence-packages/ti-local-update.png" alt-text="Screenshot of the Sensor TI update pane with the most recent threat intelligence package." lightbox="media/how-to-work-with-threat-intelligence-packages/ti-local-update.png":::
Details about the most recent package available are shown in the **Sensor TI upd
Update threat intelligence packages on your OT sensors using any of the following methods: -- [Have updates pushed](#automatically-push-updates-to-cloud-connected-sensors) to cloud-connected OT sensors automatically as they're released-- [Manually push](#manually-push-updates-to-cloud-connected-sensors) updates to cloud-connected OT sensors
+- [Have updates pushed](#automatically-push-updates-to-cloud-connected-sensors) to cloud-connected OT sensors automatically as they're released.
+- [Manually push](#manually-push-updates-to-cloud-connected-sensors) updates to cloud-connected OT sensors.
- [Download an update package](#manually-update-locally-managed-sensors) and manually upload it to your OT sensor. Alternately, upload the package to an on-premises management console and push the updates from there to any connected OT sensors. ### Automatically push updates to cloud-connected sensors
-Threat intelligence packages can be automatically updated to *cloud connected* sensors as they're released by Defender for IoT.
+Threat intelligence packages can be automatically updated to cloud-connected sensors as they're released by Defender for IoT.
-Ensure automatic package update by onboarding your cloud connected sensor with the **Automatic Threat Intelligence Updates** option enabled. For more information, see [Onboard a sensor](tutorial-onboarding.md#onboard-and-activate-the-virtual-sensor).
+Ensure automatic package update by onboarding your cloud-connected sensor with the **Automatic Threat Intelligence Updates** option enabled. For more information, see [Onboard a sensor](tutorial-onboarding.md#onboard-and-activate-the-virtual-sensor).
**To change the update mode after you've onboarded your OT sensor**:
Ensure automatic package update by onboarding your cloud connected sensor with t
### Manually push updates to cloud-connected sensors
-Your *cloud connected* sensors can be automatically updated with threat intelligence packages. However, if you would like to take a more conservative approach, you can push packages from Defender for IoT to sensors only when you feel it's required. Pushing updates manually gives you the ability to control when a package is installed, without the need to download and then upload it to your sensors.
+Your cloud-connected sensors can be automatically updated with threat intelligence packages. However, if you would like to take a more conservative approach, you can push packages from Defender for IoT to sensors only when you feel it's required. Pushing updates manually gives you the ability to control when a package is installed, without the need to download and then upload it to your sensors.
**To manually push updates to a single OT sensor**:
If you're also working with an on-premises management console, we recommend that
1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Threat intelligence update (Preview)** > **Local update**.
-1. In the **Sensor TI update** pane, select **Download** to download the latest threat intelligence file. For example:
+1. In the **Sensor TI update** pane, select **Download** to download the latest threat intelligence file.
[!INCLUDE [root-of-trust](includes/root-of-trust.md)]
defender-for-iot Manage Subscriptions Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-subscriptions-enterprise.md
Before performing the procedures in this article, make sure that you have:
- **In Azure RBAC**: [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) for the Azure subscription that you'll be using for the integration
-## Calculate committed devices for Enterprise IoT monitoring
+### Calculate committed devices for Enterprise IoT monitoring
-If you're adding an Enterprise IoT plan with a monthly commitment, you'll be asked to enter the number of committed devices.
-
-We recommend that you make an initial estimate of your committed devices when onboarding your plan. You can skip this procedure if you're adding a [trial plan](billing.md#free-trial).
+If you're working with a monthly commitment, you'll need to periodically update the number of *committed devices* in your plan as your network grows.
**To calculate committed devices:**:
Use **1700** as the estimated number of committed devices.
For more information, see the [Defender for Endpoint Device discovery overview](/microsoft-365/security/defender-endpoint/device-discovery). > [!NOTE]
-> Devices listed on the **Computers & Mobile** tab, including those managed by Defender for Endpoint or otherwise, are not included in the number of committed devices for Defender for IoT.
+> Devices listed on the **Computers & Mobile** tab, including those managed by Defender for Endpoint or otherwise, are not included in the number of [committed devices](billing.md#defender-for-iot-committed-devices) for Defender for IoT.
## Onboard an Enterprise IoT plan
This procedure describes how to add an Enterprise IoT plan to your Azure subscri
Microsoft Defender for IoT provides a 30-day free trial for evaluation purposes, with an unlimited number of devices. For more information, see the [Microsoft Defender for IoT pricing page](https://azure.microsoft.com/pricing/details/iot-defender/).
- Monthly commitments require that you enter the number of committed devices that you'd calculated earlier.
+ Monthly commitments require that you enter the number of [committed devices](#calculate-committed-devices-for-enterprise-iot-monitoring) that you'd calculated earlier.
1. Select the **I accept the terms and conditions** option and then select **Save**.
defender-for-iot Manage Users Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-overview.md
Defender for IoT's integration with Active Directory supports LDAP v3 and the fo
For more information, see: -- [Integrate OT sensor users with Active Directory](manage-users-sensor.md#integrate-ot-sensor-users-with-active-directory)
+- [Configure an Active Directory connection](manage-users-sensor.md#configure-an-active-directory-connection)
- [Integrate on-premises management console users with Active Directory](manage-users-on-premises-management-console.md#integrate-users-with-active-directory)-- [Other firewall rules for external services (optional)](how-to-set-up-your-network.md#other-firewall-rules-for-external-services-optional).-
+- [Other firewall rules for external services (optional)](networking-requirements.md#other-firewall-rules-for-external-services-optional).
### On-premises global access groups
For example, the following diagram shows how you can allow security analysts fro
For more information, see [Define global access permission for on-premises users](manage-users-on-premises-management-console.md#define-global-access-permission-for-on-premises-users). > [!TIP]
-> Access groups and rules help to implement zero-trust strategies by controlling where users manage and analyze devices on Defender for IoT sensors and the on-premises management console. For more information, see [Gain insight into global, regional, and local threats](how-to-gain-insight-into-global-regional-and-local-threats.md).
+> Access groups and rules help to implement Zero Trust strategies by controlling where users manage and analyze devices on Defender for IoT sensors and the on-premises management console. For more information, see [Zero Trust and your OT/IoT networks](concept-zero-trust.md).
> ## Next steps
For more information, see [Define global access permission for on-premises users
For more information, see: - [Azure user roles and permissions for Defender for IoT](roles-azure.md)-- [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md)
+- [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md)
defender-for-iot Manage Users Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-users-sensor.md
When setting up a sensor for the first time, sign in with one of these privilege
For more information, see [Install OT monitoring software on OT sensors](how-to-install-software.md) and [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
-## Add new OT sensor users
-
-This procedure describes how to create new users for a specific OT network sensor.
-
-**Prerequisites**: This procedure is available for the *cyberx*, *support*, and *cyberx_host* users, and any user with the **Admin** role.
-
-**To add a user**:
-
-1. Sign in to the sensor console and select **Users** > **+ Add user**.
-
-1. On the **Create a user | Users** page, enter the following details:
-
- |Name |Description |
- |||
- |**User name** | Enter a meaningful username for the user. |
- |**Email** | Enter the user's email address. |
- |**First Name** | Enter the user's first name. |
- |**Last Name** | Enter the user's last name. |
- |**Role** | Select one of the following user roles: **Admin**, **Security Analyst**, or **Read Only**. For more information, see [On-premises user roles](roles-on-premises.md#on-premises-user-roles). |
- |**Password** | Select the user type, either **Local** or **Active Directory User**. <br><br>For local users, enter a password for the user. Password requirements include: <br>- At least eight characters<br>- Both lowercase and uppercase alphabetic characters<br>- At least one number<br>- At least one symbol<br><br>Local user passwords can only be modified by **Admin** users.|
- > [!TIP]
- > Integrating with Active Directory lets you associate groups of users with specific permission levels. If you want to create users using Active Directory, first configure [Active Directory on the sensor](manage-users-sensor.md#integrate-ot-sensor-users-with-active-directory) and then return to this procedure.
- >
+## Configure an Active Directory connection
-1. Select **Save** when you're done.
-
-Your new user is added and is listed on the sensor **Users** page.
-
-To edit a user, select the **Edit** :::image type="icon" source="media/manage-users-on-premises-management-console/icon-edit.png" border="false"::: icon for the user you want to edit, and change any values as needed.
-
-To delete a user, select the **Delete** button for the user you want to delete.
-
-## Integrate OT sensor users with Active Directory
-
-Configure an integration between your sensor and Active Directory to:
--- Allow Active Directory users to sign in to your sensor-- Use Active Directory groups, with collective permissions assigned to all users in the group
+We recommend configuring on-premises users on your OT sensor with Active Directory, in order to allow Active Directory users to sign in to your sensor and use Active Directory groups, with collective permissions assigned to all users in the group.
For example, use Active Directory when you have a large number of users that you want to assign Read Only access to, and you want to manage those permissions at the group level.
-For more information, see [Active Directory support on sensors and on-premises management consoles](manage-users-overview.md#active-directory-support-on-sensors-and-on-premises-management-consoles).
-
-**Prerequisites**: This procedure is available for the *cyberx* and *support* users, and any user with the **Admin** role.
- **To integrate with Active Directory**: 1. Sign in to your OT sensor and select **System Settings** > **Integrations** > **Active Directory**.
For more information, see [Active Directory support on sensors and on-premises m
|**Primary Domain** | The domain name, such as `subdomain.contoso.com`, and then select the connection type for your LDAP configuration. <br><br>Supported connection types include: **LDAPS/NTLMv3** (recommended), **LDAP/NTLMv3**, or **LDAP/SASL-MD5** | |**Active Directory Groups** | Select **+ Add** to add an Active Directory group to each permission level listed, as needed. <br><br> When you enter a group name, make sure that you enter the group name exactly as it's defined in your Active Directory configuration on the LDAP server. You'll use these group names when [adding new sensor users](#add-new-ot-sensor-users) with Active Directory.<br><br> Supported permission levels include **Read-only**, **Security Analyst**, **Admin**, and **Trusted Domains**. | - > [!IMPORTANT] > When entering LDAP parameters: >
For more information, see [Active Directory support on sensors and on-premises m
:::image type="content" source="media/manage-users-sensor/active-directory-integration-example.png" alt-text="Screenshot of the active directory integration configuration on the sensor.":::
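
Before saving the integration, you may want to confirm that the LDAP server is reachable and that the group names you plan to enter resolve as expected. The following is a minimal sketch using `ldapsearch` from a Linux workstation on the same network segment; the host name, bind account, search base, and group name are placeholders for your own environment, and the `ldap-utils` (or `openldap-clients`) package is assumed to be installed:

```bash
# Query the LDAP server over LDAPS (port 636) and list the members of a group
# that you plan to map to a Defender for IoT permission level.
ldapsearch -H ldaps://dc1.subdomain.contoso.com:636 \
  -D "svc-defender@subdomain.contoso.com" -W \
  -b "DC=subdomain,DC=contoso,DC=com" \
  "(&(objectClass=group)(cn=DefenderIoT-ReadOnly))" member
```

If the bind fails, verify the service account credentials; if the TLS handshake fails, check the LDAPS certificate on the domain controller before saving the sensor integration.
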
+## Add new OT sensor users
+
+This procedure describes how to create new users for a specific OT network sensor.
+
+**Prerequisites**: This procedure is available for the *cyberx*, *support*, and *cyberx_host* users, and any user with the **Admin** role.
+
+**To add a user**:
+
+1. Sign in to the sensor console and select **Users** > **+ Add user**.
+
+1. On the **Create a user | Users** page, enter the following details:
+
+ |Name |Description |
+ |||
+ |**User name** | Enter a meaningful username for the user. |
+ |**Email** | Enter the user's email address. |
+ |**First Name** | Enter the user's first name. |
+ |**Last Name** | Enter the user's last name. |
+ |**Role** | Select one of the following user roles: **Admin**, **Security Analyst**, or **Read Only**. For more information, see [On-premises user roles](roles-on-premises.md#on-premises-user-roles). |
+ |**Password** | Select the user type, either **Local** or **Active Directory User**. <br><br>For local users, enter a password for the user. Password requirements include: <br>- At least eight characters<br>- Both lowercase and uppercase alphabetic characters<br>- At least one number<br>- At least one symbol<br><br>Local user passwords can only be modified by **Admin** users.|
+
+ > [!TIP]
+ > Integrating with Active Directory lets you associate groups of users with specific permission levels. If you want to create users using Active Directory, first [configure an Active Directory connection](#configure-an-active-directory-connection) and then return to this procedure.
+ >
+
+1. Select **Save** when you're done.
+
+Your new user is added and is listed on the sensor **Users** page.
+
+To edit a user, select the **Edit** :::image type="icon" source="media/manage-users-on-premises-management-console/icon-edit.png" border="false"::: icon for the user you want to edit, and change any values as needed.
+
+To delete a user, select the **Delete** button for the user you want to delete.
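+
+If you're creating several local users, you may want to generate passwords that meet the requirements listed above (at least eight characters, mixed case, a number, and a symbol). The following is a small illustrative sketch only, assuming a Linux or macOS workstation with `openssl` installed; it isn't part of the sensor itself:
+
+```bash
+# Generate candidate passwords until one satisfies the documented complexity rules.
+while true; do
+  pw=$(openssl rand -base64 15)
+  if [[ ${#pw} -ge 8 && "$pw" =~ [a-z] && "$pw" =~ [A-Z] && "$pw" =~ [0-9] && "$pw" =~ [^a-zA-Z0-9] ]]; then
+    echo "$pw"
+    break
+  fi
+done
+```
+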
++ ## Change a sensor user's password This procedure describes how **Admin** users can change local user passwords. **Admin** users can change passwords for themselves or for other **Security Analyst** or **Read Only** users. [Privileged users](#default-privileged-users) can change their own passwords, and the passwords for **Admin** users.
defender-for-iot Monitor Zero Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/monitor-zero-trust.md
For more information, see:
- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md) - [Manage on-premises sites and zones](ot-deploy/sites-and-zones-on-premises.md#manage-sites-and-zones) - [Manage site-based access control (Public preview)](manage-users-portal.md#manage-site-based-access-control-public-preview)-- [Visualize Microsoft Defender for IoT data with Azure Monitor workbooks](workbooks.md)
+- [Visualize Microsoft Defender for IoT data with Azure Monitor workbooks](workbooks.md)
defender-for-iot Networking Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/networking-requirements.md
+
+ Title: Networking requirements - Microsoft Defender for IoT
+description: Learn about Microsoft Defender for IoT's networking requirements for network sensors, on-premises management consoles, and deployment workstations.
Last updated : 01/15/2023+++
+# Networking requirements
+
+This article lists the interfaces that must be accessible on Microsoft Defender for IoT network sensors, on-premises management consoles, and deployment workstations in order for services to function as expected.
+
+Make sure that your organization's security policy allows access for the interfaces listed in the tables below.
+
+## User access to the sensor and management console
+
+| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination |
+|--|--|--|--|--|--|--|--|
+| SSH | TCP | In/Out | 22 | CLI | To access the CLI | Client | Sensor and on-premises management console |
+| HTTPS | TCP | In/Out | 443 | Web console | To access the sensor and on-premises management console web consoles | Client | Sensor and on-premises management console |
+
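+You can verify that your security policy allows this access by testing both ports from a client workstation. The following is a minimal sketch using `nc` (netcat) and `curl` on Linux, where `<appliance-ip>` is a placeholder for your sensor or on-premises management console address:
+
+```bash
+# Check that SSH (22) and HTTPS (443) are reachable from the client workstation.
+nc -vz <appliance-ip> 22
+nc -vz <appliance-ip> 443
+
+# Confirm that the web console answers over HTTPS. Use -k if the appliance
+# still has a self-signed certificate installed.
+curl -k -I https://<appliance-ip>
+```
+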
+## Sensor access to Azure portal
+
+| Protocol | Transport | In/Out | Port | Purpose | Source | Destination |
+|--|--|--|--|--|--|--|
+| HTTPS | TCP | Out | 443 | Access to Azure | Sensor |OT network sensors connect to Azure to provide alert and device data and sensor health messages, access threat intelligence packages, and more. Connected Azure services include IoT Hub, Blob Storage, Event Hubs, and the Microsoft Download Center.<br><br>Download the list from the **Sites and sensors** page in the Azure portal. Select an OT sensor with software versions 22.x or higher, or a site with one or more supported sensor versions. Then, select **More options > Download endpoint details**. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).|
+
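+You can also spot-check outbound connectivity from the sensor's network segment before deployment. The sketch below assumes that you've already downloaded the endpoint details file described in the table; `<endpoint>`, `<proxy-host>`, and `<proxy-port>` are placeholders for your own values:
+
+```bash
+# Test outbound HTTPS (443) to one of the hostnames listed in the downloaded endpoint details file.
+curl --connect-timeout 10 -sI https://<endpoint> | head -n 1
+
+# If traffic must go through a proxy, test through the proxy explicitly.
+curl --proxy http://<proxy-host>:<proxy-port> --connect-timeout 10 -sI https://<endpoint> | head -n 1
+```
+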
+## Sensor access to the on-premises management console
+
+| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination |
+|--|--|--|--|--|--|--|--|
+| NTP | UDP | In/Out | 123 | Time Sync | Synchronizes the sensor's time with the on-premises management console | Sensor | On-premises management console |
+| TLS/SSL | TCP | In/Out | 443 | Gives the sensor access to the on-premises management console | The connection between the sensor and the on-premises management console | Sensor | On-premises management console |
+
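+To confirm that a sensor can reach the on-premises management console on these ports, you can run a quick check from a Linux host on the sensor's network segment. This is an illustrative sketch only; `<console-ip>` is a placeholder, and the `ntpdate` utility is assumed to be installed:
+
+```bash
+# Check the TLS port used for sensor-to-console communication.
+nc -vz <console-ip> 443
+
+# Query the console's NTP service (UDP 123) without changing the local clock.
+ntpdate -q <console-ip>
+```
+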
+## Other firewall rules for external services (optional)
+
+Open these ports to allow extra services for Defender for IoT.
+
+| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination |
+|--|--|--|--|--|--|--|--|
+| SMTP | TCP | Out | 25 | Email | Used to connect to the customer's mail server, in order to send emails for alerts and events | Sensor and On-premises management console | Email server |
+| DNS | TCP/UDP | In/Out | 53 | DNS | The DNS server port | On-premises management console and Sensor | DNS server |
+| HTTP | TCP | Out | 80 | The CRL download for certificate validation when uploading certificates. | Access to the CRL server | Sensor and on-premises management console | CRL server |
+| [WMI](how-to-configure-windows-endpoint-monitoring.md) | TCP/UDP | Out | 135, 1025-65535 | Monitoring | Windows Endpoint Monitoring | Sensor | Relevant network element |
+| [SNMP](how-to-set-up-snmp-mib-monitoring.md) | UDP | Out | 161 | Monitoring | Monitors the sensor's health | On-premises management console and Sensor | SNMP server |
+| LDAP | TCP | In/Out | 389 | Active Directory | Allows Active Directory management of users that have access to sign in to the system | On-premises management console and Sensor | LDAP server |
+| Proxy | TCP/UDP | In/Out | 443 | Proxy | To connect the sensor to a proxy server | On-premises management console and Sensor | Proxy server |
+| Syslog | UDP | Out | 514 | LEEF | The logs that are sent from the on-premises management console to Syslog server | On-premises management console and Sensor | Syslog server |
+| LDAPS | TCP | In/Out | 636 | Active Directory | Allows Active Directory management of users that have access to sign in to the system | On-premises management console and Sensor | LDAPS server |
+| Tunneling | TCP | In | 9000 </br></br> In addition to port 443 </br></br> Allows access from the sensor or end user to the on-premises management console </br></br> Port 22 from the sensor to the on-premises management console | Monitoring | Tunneling | Endpoint, Sensor | On-premises management console |
+
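+For example, if one of these destination servers, such as a syslog or LDAP server, runs Linux with `firewalld`, opening its listening ports might look like the following sketch. This is an illustrative example only; adapt the ports and commands to whichever firewall product your organization actually uses:
+
+```bash
+# Allow syslog forwarding and LDAPS lookups from the monitoring appliances (example ports only).
+sudo firewall-cmd --permanent --add-port=514/udp
+sudo firewall-cmd --permanent --add-port=636/tcp
+sudo firewall-cmd --reload
+```
+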
+## Next steps
+
+For more information, see [Plan and prepare for deploying a Defender for IoT site](best-practices/plan-prepare-deploy.md).
defender-for-iot Onboard Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/onboard-sensors.md
Title: Onboard sensors to Defender for IoT in the Azure portal description: Learn how to onboard sensors to Defender for IoT in the Azure portal. Previously updated : 06/02/2022 Last updated : 03/02/2023 - zerotrust-services
# Onboard OT sensors to Defender for IoT
-This article describes how to onboard sensors with [Defender for IoT in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started).
+This article is one in a series of articles describing the [deployment path](ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT, and describes how to onboard OT network sensors to [Microsoft Defender for IoT in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started).
-> [!TIP]
-> As part of the onboarding process, you'll assign your sensor to a site and zone. Segmenting your network by sites and zones is an integral part of implementing a [Zero Trust security strategy](concept-zero-trust.md). Assinging sensors to specific sites and zones will help you monitor for unauthorized traffic crossing segments.
->
-> Data ingested from sensors in the same site or zone can be viewed together, segemented out from other data in your system. If there's sensor data that you want to view grouped together in the same site or zone, make sure to assign sensor sites and zones accordingly.
## Prerequisites
-To perform the procedures in this article, you need:
+Before you onboard an OT network sensor to Defender for IoT, make sure that you have the following:
-- An [OT plan added](how-to-manage-subscriptions.md) in Defender for IoT in the Azure portal.
+- An [OT plan onboarded](getting-started.md) to Defender for IoT
-- A clear understanding of where your OT network sensors are placed in your network, and how you want to [segment your network into sites and zones](concept-zero-trust.md).
+- Access to the Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user.
-## Purchase sensors or download software for sensors
+- An understanding of which [site and zone](best-practices/plan-corporate-monitoring.md#plan-ot-sites-and-zones) you'll want to assign to your sensor.
-This procedure describes how to use the Azure portal to contact vendors for pre-configured appliances, or how to download software for you to install on your own appliances.
+ Assigning sensors to specific sites and zones is an integral part of implementing a [Zero Trust security strategy](concept-zero-trust.md), and will help you monitor for unauthorized traffic crossing segments. For more information, see [List your planned OT sensors](best-practices/plan-prepare-deploy.md#list-your-planned-ot-sensors).
-1. In the Azure portal, go to **Defender for IoT** > **Getting started** > **Sensor**.
-
-1. Do one of the following steps:
-
- - **To buy a pre-configured appliance**, select **Contact** under **Buy preconfigured appliance**.
-
- This link opens an email to [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsoft.com&subject=Information%20about%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances&body=Dear%20Arrow%20Representative,%0D%0DOur%20organization%20is%20interested%20in%20receiving%20quotes%20for%20Microsoft%20Defender%20for%20IoT%20appliances%20as%20well%20as%20fulfillment%20options.%0D%0DThe%20purpose%20of%20this%20communication%20is%20to%20inform%20you%20of%20a%20project%20which%20involves%20[NUMBER]%20sites%20and%20[NUMBER]%20sensors%20for%20[ORGANIZATION%20NAME].%20Having%20reviewed%20potential%20appliances%20suitable%20for%20our%20project,%20we%20would%20like%20to%20obtain%20more%20information%20about:%20___________%0D%0D%0DI%20would%20appreciate%20being%20contacted%20by%20an%20Arrow%20representative%20to%20receive%20a%20quote%20for%20the%20items%20mentioned%20above.%0DI%20understand%20the%20quote%20and%20appliance%20delivery%20shall%20be%20in%20accordance%20with%20the%20relevant%20Arrow%20terms%20and%20conditions%20for%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances.%0D%0D%0DBest%20Regards,%0D%0D%0D%0D%0D%0D//////////////////////////////%20%0D/////////%20Replace%20[NUMBER]%20with%20appropriate%20values%20related%20to%20your%20request.%0D/////////%20Replace%20[ORGANIZATION%20NAME]%20with%20the%20name%20of%20the%20organization%20you%20represent.%0D//////////////////////////////%0D%0D)with a template request for Defender for IoT appliances. For more information, see [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md).
-
- - **To install software on your own appliances**, do the following:
-
- 1. Make sure that you have a supported appliance available. For more information, see [Which appliances do I need?](ot-appliance-sizing.md).
-
- 1. Under **Select version**, select the software version you want to install. We recommend that you always select the most recent version.
-
- 1. Select **Download**. Download the sensor software and save it in a location that you can access from your selected appliance.
-
- [!INCLUDE [root-of-trust](includes/root-of-trust.md)]
-
- 1. Install your software. For more information, see [Defender for IoT installation](how-to-install-software.md).
+This step is performed by your deployment teams.
## Onboard an OT sensor
-This procedure describes how to *onboard*, or register, an OT network sensor with Defender for IoT and download a sensor activation file.
+This procedure describes how to onboard an OT network sensor with Defender for IoT and download a sensor activation file.
**To onboard your OT sensor to Defender for IoT**:
This procedure describes how to *onboard*, or register, an OT network sensor wit
Alternately, from the Defender for IoT **Sites and sensors** page, select **Onboard OT sensor** > **OT**.
-1. By default, on the **Set up OT/ICS Security** page, **Step 1: Did you set up a sensor?** and **Step 2: Configure SPAN port or TAP** of the wizard are collapsed. If you haven't completed these steps, do so before continuing. For more information, see:
+1. By default, on the **Set up OT/ICS Security** page, **Step 1: Did you set up a sensor?** and **Step 2: Configure SPAN port or TAP** of the wizard are collapsed.
+
+ You'll install software and configure traffic mirroring later on in the deployment process, but should have your appliances ready and traffic mirroring method planned. For more information, see:
+ - [Prepare on-premises appliances](best-practices/plan-prepare-deploy.md#prepare-on-premises-appliances)
- [Choose a traffic mirroring method for traffic monitoring](best-practices/traffic-mirroring-methods.md)
- - [Install OT monitoring software on OT sensors](ot-deploy/install-software-ot-sensor.md)
1. In **Step 3: Register this sensor with Microsoft Defender for IoT** enter or select the following values for your sensor: 1. In the **Sensor name** field, enter a meaningful name for your OT sensor.
- We recommend including your OT sensor's IP address as part of the name, or using another easily identifiable name. You want to keep track of the registration name in the Azure portal and the IP address of the sensor shown in the OT sensor console.
+ We recommend including your OT sensor's IP address as part of the name, or using another easily identifiable name. You'll want to keep track of the registration name in the Azure portal and the IP address of the sensor shown in the OT sensor console.
1. In the **Subscription** field, select your Azure subscription. If you don't yet have a subscription to select, select **Onboard subscription** to [add an OT plan to your Azure subscription](getting-started.md).
- 1. (Optional) Toggle on the **Cloud connected** option to have your OT sensor connected to Azure services, such as Microsoft Sentinel. For more information, see [Cloud-connected vs. local OT sensors](architecture.md#cloud-connected-vs-local-ot-sensors).
+ 1. (Optional) Toggle on the **Cloud connected** option to view detected data and manage your sensor from the Azure portal, and to connect your data to other Microsoft services, such as Microsoft Sentinel.
+
+ For more information, see [Cloud-connected vs. local OT sensors](architecture.md#cloud-connected-vs-local-ot-sensors).
1. (Optional) Toggle on the **Automatic Threat Intelligence updates** to have Defender for IoT automatically push [threat intelligence packages](how-to-work-with-threat-intelligence-packages.md) to your OT sensor.
This procedure describes how to *onboard*, or register, an OT network sensor wit
1. In the **Site** section, enter the following details to define your OT sensor's site.
- - In the **Resource name** field, select the site you want to use for your OT sensor, or select **Create site** to create a new one.
-
- - In the **Display name** field, enter a meaningful name for your site to be shown across Defender for IoT in Azure.
-
- - (Optional) In the **Tags** > **Key** and **Value** fields, enter tag values to help you identify and locate your site and sensor in the Azure portal.
+ |Field |Description |
+ |||
+ |**Resource name** | In the **Resource name** field, select the site you want to use for your OT sensor, or select **Create site** to create a new one. |
+ |**Display name** | In the **Display name** field, enter a meaningful name for your site to be shown across Defender for IoT in Azure. |
+ |**Tags** (Optional) | In the **Tags** > **Key** and **Value** fields, enter tag values to help you identify and locate your site and sensor in the Azure portal. |
1. In the **Zone** field, select the zone you want to use for your OT sensor, or select **Create zone** to create a new one. For example:
- :::image type="content" source="media/sites-and-zones/sites-and-zones-azure.png" alt-text="Screenshot of the Set up OT/ICS Security page with site and zone details defined":::
+ :::image type="content" source="media/sites-and-zones/sites-and-zones-azure.png" alt-text="Screenshot of the Set up OT/ICS Security page with site and zone details defined." lightbox="media/sites-and-zones/sites-and-zones-azure.png":::
1. When you're done with all other fields, select **Register**.
-A success message appears and your activation file is automatically downloaded, and your sensor is now shown under the configured site on the Defender for IoT **Sites and sensors** page.
-
-Until you activate your sensor, the sensor's status shows as **Pending Activation**.
+A success message appears and your activation file is automatically downloaded. Your sensor is now shown under the configured site on the Defender for IoT **Sites and sensors** page.
-Make the downloaded activation file accessible to the sensor console admin so that they can activate the sensor.
+Until you activate your sensor, the sensor's status will show as **Pending Activation**. Make the downloaded activation file accessible to the sensor console admin so that they can activate the sensor.
[!INCLUDE [root-of-trust](includes/root-of-trust.md)]
+> [!NOTE]
+> Sites and zones configured on the Azure portal are not synchronized with [sites and zones configured on an on-premises management console](ot-deploy/sites-and-zones-on-premises.md).
+>
+> If you're working with a large deployment, we recommend that you use the Azure portal to manage cloud-connected sensors, and an on-premises management console to manage locally-managed sensors.
+ ## Next steps -- [Install OT agentless monitoring software](how-to-install-software.md)-- [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md)-- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)-- [Manage individual sensors](how-to-manage-individual-sensors.md)-- [View and manage alerts on the Defender for IoT portal (Preview)](how-to-manage-cloud-alerts.md)
+> [!div class="step-by-step"]
+> [« Prepare an OT site deployment](best-practices/plan-prepare-deploy.md)
+
+> [!div class="step-by-step"]
+> [Configure traffic mirroring »](traffic-mirroring/traffic-mirroring-overview.md)
defender-for-iot Ot Appliance Sizing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-appliance-sizing.md
# Which appliances do I need?
-This article is designed to help you choose the right OT appliances for your sensors and on-premises management consoles. Use the tables below to understand which hardware profile best fits your organization's network monitoring needs.
+This article is one in a series of articles describing the [deployment path](ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT. It's intended to help you choose the right appliances for your system and understand which hardware profile best fits your organization's network monitoring needs.
-[Physical](ot-pre-configured-appliances.md) or [virtual](ot-virtual-appliances.md) appliances can be used; results depend on hardware and resources available to the monitoring sensor.
+You can use [physical](ot-pre-configured-appliances.md) or [virtual](ot-virtual-appliances.md) appliances. Results depend on hardware and resources available to the monitoring sensor.
-> [!NOTE]
+
+> [!IMPORTANT]
> The performance, capacity, and activity of an OT/IoT network may vary depending on its size, capacity, protocols distribution, and overall activity. For deployments, it is important to factor in raw network speed, the size of the network to monitor, and application configuration. The selection of processors, memory, and network cards is heavily influenced by these deployment configurations. The amount of space needed on your disk will differ depending on how long you store data, and the amount and type of data you store. > >*Performance values are presented as upper thresholds under the assumption of intermittent traffic profiles, such as those found in OT/IoT systems and machine-to-machine communication networks.*
+> [!NOTE]
+> This article also includes information relevant for on-premises management consoles. For more information, see the [Air-gapped OT sensor management deployment path](ot-deploy/air-gapped-deploy.md).
+>
+ ## IT/OT mixed environments Use the following hardware profiles for high bandwidth corporate IT/OT mixed networks:
On-premises management consoles allow you to manage and monitor large, multiple-
## Next steps
-Continue understanding system requirements, including options for ordering pre-configured appliances, or required specifications to install software on your own appliances:
--- [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md)-- [Resource requirements for virtual appliances](ot-virtual-appliances.md)-
-Then, use any of the following procedures to continue:
--- [Purchase sensors or download software for sensors](onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)-- [Download software for an on-premises management console](how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install software](how-to-install-software.md)-
-Reference articles for OT monitoring appliances also include installation procedures in case you need to install software on your own appliances, or reinstall software on preconfigured appliances.
+> [!div class="step-by-step"]
+> [« Prepare an OT site deployment](best-practices/plan-prepare-deploy.md)
defender-for-iot Activate Deploy Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/activate-deploy-management.md
+
+ Title: Activate and set up an on-premises management console - Microsoft Defender for IoT
+description: Learn how to activate and set up an on-premises management console when deploying your Microsoft Defender for IoT system for OT network monitoring.
Last updated : 01/16/2023+++
+# Activate and set up an on-premises management console
+
+This article is one in a series of articles describing the [deployment path](air-gapped-deploy.md) for a Microsoft Defender for IoT on-premises management console for air-gapped OT sensors.
++
+When working in an air-gapped or hybrid operational technology (OT) environment with multiple sensors, use an on-premises management console to configure settings and view data in a central location for all connected OT sensors.
+
+This article describes how to activate your on-premises management console and configure settings for an initial deployment.
+
+## Prerequisites
+
+Before performing the procedures in this article, you need to have:
+
+- An [on-premises management console installed](install-software-on-premises-management-console.md)
+
+- Access to the on-premises management console as one of the [privileged users supplied during installation](install-software-on-premises-management-console.md#users)
+
+- An SSL/TLS certificate. We recommend using a CA-signed certificate, and not a self-signed certificate. For more information, see [Create SSL/TLS certificates for OT appliances](create-ssl-certificates.md).
+
+- Access to the Azure portal as a [Security Admin](../../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../../role-based-access-control/built-in-roles.md#owner) user
+
+## Sign in to your on-premises management console
+
+During the [software installation process](install-software-on-premises-management-console.md#users), you'll have received a set of credentials for privileged access. We recommend using the **Support** credentials when signing into the on-premises management console for the first time.
+
+For more information, see [Default privileged on-premises users](../roles-on-premises.md#default-privileged-on-premises-users).
+
+In a browser, go to the on-premises management console's IP address, and enter the username and password.
+
+> [!NOTE]
+> If you forgot your password, select **Password recovery** to reset the password. For more information, see [Recover a privileged user password](../how-to-manage-the-on-premises-management-console.md#recover-a-privileged-user-password).
+>
+
+## Activate the on-premises management console
+
+Activate your on-premises management console using a downloaded file from the Azure portal. Defender for IoT activation files track the number of committed devices detected by connected OT sensors against the number of committed devices in your OT plan.
+
+If your sensors detect more devices than you have included in your plan, update the number of committed devices. For more information, see [Manage OT plans on Azure subscriptions](../how-to-manage-subscriptions.md).
+
+**To activate**:
+
+1. After signing into the on-premises management console for the first time, you'll see a message prompting you to take action for a missing activation file. In the message bar, select the **Take action** link.
+
+ An **Activation** dialog shows the number of monitored devices and registered committed devices. Since you're just starting the deployment, both of these values should be **0**.
+
+1. Select the link to the **Azure portal** to jump to Defender for IoT's **Plans and pricing** page in the Azure portal.
+
+1. In the **Plans** grid, select one or more subscriptions.
+
+ If you select multiple subscriptions, the activation file is associated with all selected subscriptions and the number of committed devices defined at the time of download.
+
+ If you don't see the subscription that you're looking for, make sure that you're viewing the Azure portal with the correct subscriptions selected. For more information, see [Manage Azure portal settings](../../../azure-portal/set-preferences.md).
+
+1. In the toolbar, select **Download on-premises management console activation file**. For example:
+
+ :::image type="content" source="../media/how-to-manage-sensors-from-the-on-premises-management-console/multiple-subscriptions.png" alt-text="Screenshot that shows selecting multiple subscriptions." lightbox="../media/how-to-manage-sensors-from-the-on-premises-management-console/multiple-subscriptions.png":::
+
+ The activation file downloads.
+
+ [!INCLUDE [root-of-trust](../includes/root-of-trust.md)]
+
+1. Return to your on-premises management console. In the **Activation** dialog, select **CHOOSE FILE** and select the downloaded activation file.
+
+ A confirmation message appears to confirm that the file's been uploaded successfully.
+
+> [!NOTE]
+> You'll need to upload a new activation file in specific cases, such as if you modify the number of committed devices in your OT plan after having uploaded your initial activation file, or if you've [deleted your OT plan](../how-to-manage-subscriptions.md#edit-a-plan-for-ot-networks) from the subscription that the previous activation file was associated with.
+>
+> For more information, see [Upload a new activation file](../how-to-manage-the-on-premises-management-console.md#upload-a-new-activation-file).
+
+## Deploy an SSL/TLS certificate
+
+The following procedures describe how to deploy an SSL/TLS certificate on your OT sensor. We recommend using CA-signed certificates in production environments.
+
+The requirements for SSL/TLS certificates are the same for OT sensors and on-premises management consoles. For more information, see:
+
+- [SSL/TLS certificate requirements for on-premises resources](../best-practices/certificate-requirements.md)
+- [Create SSL/TLS certificates for OT appliances](create-ssl-certificates.md)
+
+**To upload a CA-signed certificate**:
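+Before uploading, you can sanity-check that the certificate, private key, and chain files you received from your CA belong together. The following is a minimal sketch using `openssl` on a Linux workstation; the file names are placeholders for your own files:
+
+```bash
+# Inspect the certificate's subject, issuer, and validity period.
+openssl x509 -in management-console.crt -noout -subject -issuer -dates
+
+# Verify the certificate against the CA chain you plan to upload.
+openssl verify -CAfile ca-chain.pem management-console.crt
+
+# Confirm that the private key matches the certificate (the two hashes should be identical).
+openssl x509 -in management-console.crt -noout -pubkey | openssl sha256
+openssl pkey -in management-console.key -pubout | openssl sha256
+```
+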
+
+1. Sign into your on-premises management console and select **System settings** > **SSL/TLS Certificates**.
+
+1. In the **SSL/TLS Certificates** dialog, select **Add Certificate**.
+
+1. In the **Import a trusted CA-signed certificate** area, enter a certificate name and optional passphrase, and then upload your CA-signed certificate files.
+
+1. (Optional) Clear the **Enable certificate validation** option to avoid validating the certificate against a CRL server.
+
+1. Select **SAVE** to save your certificate settings.
+
+For more information, see [Troubleshoot certificate upload errors](../how-to-manage-the-on-premises-management-console.md#troubleshoot-certificate-upload-errors).
+
+## Next steps
+
+> [!div class="step-by-step"]
+> [« Install Microsoft Defender for IoT on-premises management console software](install-software-on-premises-management-console.md)
+
+> [!div class="step-by-step"]
+> [Connect OT network sensors to the on-premises management console »](connect-sensors-to-management.md)
defender-for-iot Activate Deploy Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/activate-deploy-sensor.md
+
+ Title: Activate and set up an OT network sensor - Microsoft Defender for IoT
+description: Learn how to activate and set up a Microsoft Defender for IoT OT network sensor.
Last updated : 01/22/2023+++
+# Activate and set up your OT network sensor
+
+This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT, and describes how to activate and set up an OT sensor for the first time.
++
+## Prerequisites
+
+To perform the procedures in this article, you need:
+
+- An OT sensor [onboarded](../onboard-sensors.md) to Defender for IoT in the Azure portal and [installed](install-software-ot-sensor.md) or [purchased](../ot-pre-configured-appliances.md).
+
+- The sensor's activation file, which was downloaded after [onboarding your sensor](../onboard-sensors.md). You need a unique activation file for each OT sensor you deploy.
+
+ [!INCLUDE [root-of-trust](../includes/root-of-trust.md)]
+
+- An SSL/TLS certificate. We recommend using a CA-signed certificate, and not a self-signed certificate. For more information, see [Create SSL/TLS certificates for OT appliances](create-ssl-certificates.md).
+
+- Access to the OT sensor as an **Admin** user, or as one of the [privileged users supplied during installation](install-software-ot-sensor.md#credentials)
+
+ If you purchased a [preconfigured sensor](../ot-pre-configured-appliances.md), you're prompted to generate a password when signing in for the first time.
+
+This step is performed by your deployment teams.
+
+## Sign in to your OT sensor
+
+If you installed software on a sensor appliance yourself, you'll have received a set of credentials for privileged access during the [software installation process](install-software-ot-sensor.md#credentials). If you purchased pre-configured appliances, your credentials will have been delivered with the appliance.
+
+We recommend using the **Support** credentials when signing into the OT sensor for the first time. For more information, see [Default privileged on-premises users](../roles-on-premises.md#default-privileged-on-premises-users).
+
+**To sign in to your OT sensor**:
+
+1. In a browser, go to the OT sensor's IP address, and enter the username and password.
+
+ > [!NOTE]
+ > If you forgot your password, select **Reset Password**. For more information, see [Investigate password failure at initial sign-in](../how-to-troubleshoot-sensor.md#investigate-password-failure-at-initial-sign-in).
+ >
+
+ If you're working with a pre-configured appliance, you're prompted to enter a new password.
+
+1. Select **Login** to continue with the deployment wizard.
+
+## Confirm network settings
+
+The first time you sign in to your OT network sensors, you're prompted to confirm the OT sensor's network settings. These network settings were configured during installation or when you purchased a preconfigured OT sensor.
+
+**In the deployment wizard's *Sensor network settings* page**:
+
+1. Confirm or enter the following details for your OT sensor:
+
+ - **IP address**: Changing the IP address may require you to sign in again.
+ - **Subnet mask**
+ - **Default gateway**
+ - **DNS**
+ - **Hostname** (optional). If defined, make sure that the hostname matches the name that's configured in your organization's DNS server.
+
+1. If you're connecting via a proxy, select **Enable Proxy**, and then enter the following details for your proxy server:
+
+ - **Proxy host**
+ - **Proxy port**
+ - **Proxy username** (optional)
+
+1. Select **Next** to continue in the **Activation** screen.
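+
+If you want to double-check the values you entered before continuing, you can query your DNS server from another machine on the same network. The following is a small sketch using `dig` and `ping`; the hostname, domain, and addresses are placeholders for your own values:
+
+```bash
+# Confirm that the hostname you configured resolves to the sensor's IP address.
+dig +short <sensor-hostname>.<your-domain>
+
+# Confirm that the reverse lookup points back to the same hostname.
+dig +short -x <sensor-ip>
+
+# Check that the default gateway responds from the sensor's subnet.
+ping -c 3 <default-gateway-ip>
+```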
+
+## Activate the OT sensor
+
+Activate your OT sensor to connect it to your [Azure subscription and OT plan](../how-to-manage-subscriptions.md) and enforce the configured number of committed devices.
+
+**In the deployment wizard's *Sensor Activation* screen**:
+
+1. Select **Upload** to upload the activation file you'd downloaded after [onboarding your sensor to Azure](../onboard-sensors.md).
+
+ Make sure that the confirmation message includes the name of the sensor that you're deploying.
+
+1. Select the **Approve these terms and conditions** option, and then select **Activate** to continue in the **Certificates** screen.
+
+> [!NOTE]
+> Both cloud-connected and locally-managed sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. You may need to [re-activate your sensor](../how-to-manage-individual-sensors.md#upload-a-new-activation-file) in specific cases, such as if you're changing from a locally-managed sensor to a cloud-connected sensor.
+>
+
+## Deploy an SSL/TLS certificate
+
+The following procedures describe how to deploy an SSL/TLS certificate on your OT sensor. We recommend that you use a [CA-signed certificate](create-ssl-certificates.md) for all production environments.
+
+If you're working on a testing environment, you can also use the self-signed certificate that's generated during installation. For more information, see [Manage SSL/TLS certificates](../how-to-manage-individual-sensors.md#manage-ssltls-certificates).
+
+**In the deployment wizard's *SSL/TLS Certificate* screen**:
+
+1. Select **Import trusted CA certificate (recommended)** to deploy a CA-signed certificate.
+
+    Enter the certificate's name and passphrase, and then select **Upload** to upload your private key file, certificate file, and an optional certificate chain file.
+
+ You may need to refresh the page after uploading your files.
+
+1. Select **Enable certificate validation** to force your sensor to validate the certificate against a certificate revocation list (CRL), as [configured in your certificate](../best-practices/certificate-requirements.md#crt-file-requirements).
+
+1. Select **Save** to open your OT sensor console.
+
+For more information, see [Troubleshoot certificate upload errors](../how-to-manage-individual-sensors.md#troubleshoot-certificate-upload-errors).
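+
+If you enable certificate validation, the OT sensor checks the certificate against the CRL server named in the certificate itself. You can confirm in advance that a CRL distribution point is present and reachable; the following sketch assumes OpenSSL 1.1.1 or later and uses placeholder file and host names:
+
+```bash
+# Show the CRL distribution point embedded in the certificate.
+openssl x509 -in sensor.crt -noout -ext crlDistributionPoints
+
+# Confirm that the CRL URL from the output is reachable from the sensor's network.
+curl -sI http://<crl-server>/<crl-file>.crl | head -n 1
+```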
+
+## Next steps
+
+> [!div class="step-by-step"]
+> [« Validate after installing software](post-install-validation-ot-software.md)
+
+> [!div class="step-by-step"]
+> [Configure proxy settings on an OT sensor »](../connect-sensors.md)
defender-for-iot Air Gapped Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/air-gapped-deploy.md
+
+ Title: On-premises management console deployment path - Microsoft Defender for IoT
+description: Learn about the steps involved in deploying a Microsoft Defender for IoT on-premises management console to centrally manage and view data from multiple locally-managed, air-gapped OT sensors.
+ Last updated : 02/22/2023++
+# Deploy air-gapped OT sensor management
+
+When you're working with multiple, air-gapped OT sensors that can't be managed by the Azure portal, we recommend deploying an on-premises management console to manage your air-gapped OT sensors.
+
+The following image describes the steps included in deploying an on-premises management console. Learn more about each deployment step in the sections below, including relevant cross-references for more details.
+
+Deploying an on-premises management console is done by your deployment team. You can deploy an on-premises management console before or after you deploy your OT sensors, or in parallel.
++
+## Deployment steps
+
+|Step |Description |
+|||
+|**[Prepare an on-premises management console appliance](prepare-management-appliance.md)** | Just as you'd [prepared an on-premises appliance](../best-practices/plan-prepare-deploy.md#prepare-on-premises-appliances) for your OT sensors, prepare an appliance for your on-premises management console. To deploy a CA-signed certificate for production environments, make sure to prepare your certificate as well. |
+|**[Install Microsoft Defender for IoT on-premises management console software](install-software-on-premises-management-console.md)** | Download installation software from the Azure portal and install it on your on-premises management console appliance. |
+|**[Activate and set up an on-premises management console](activate-deploy-management.md)** | Use an activation file downloaded from the Azure portal to activate your on-premises management console. |
+|**[Create OT sites and zones on an on-premises management console](sites-and-zones-on-premises.md)** | If you're working with a large, air-gapped deployment, we recommend creating sites and zones on your on-premises management console, which helps you monitor for unauthorized traffic crossing network segments and is part of deploying Defender for IoT with [Zero Trust](/security/zero-trust/zero-trust-overview) principles. |
+|**[Connect OT network sensors to the on-premises management console](connect-sensors-to-management.md)** | Connect your air-gapped OT sensors to your on-premises management console to view aggregated data and configure further settings across all connected systems. |
+
+> [!NOTE]
+> Sites and zones configured on the Azure portal are not synchronized with [sites and zones configured on an on-premises management console](sites-and-zones-on-premises.md).
+>
+> When working with a large deployment, we recommend that you use the Azure portal to manage cloud-connected sensors, and an on-premises management console to manage locally-managed sensors.
+
+## Optional configurations
+
+When deploying an on-premises management console, you may also want to configure the following options:
+
+- [Active Directory integration](../manage-users-on-premises-management-console.md#integrate-users-with-active-directory), to allow Active Directory users to sign into your on-premises management console, use Active Directory groups, and configure global access groups.
+
+- [Proxy tunneling access](#access-ot-network-sensors-via-proxy-tunneling) from OT network sensors, enhancing system security across your Defender for IoT system.
+
+- [High availability](#high-availability-for-on-premises-management-consoles) for on-premises management consoles, lowering the risk to your OT sensor management resources.
+
+#### Access OT network sensors via proxy tunneling
+
+You might want to enhance your system security by preventing direct access to your OT sensors.
+
+In such cases, configure proxy tunneling on your on-premises management console to allow users to connect to OT sensors via the on-premises management console. For example:
++
+Once signed in to the OT sensor, the user experience remains the same. For more information, see [Configure OT sensor access via tunneling](connect-sensors-to-management.md#configure-ot-sensor-access-via-tunneling).
+
+#### High availability for on-premises management consoles
+
+When deploying a large OT monitoring system with Defender for IoT, you may want to use a pair of primary and secondary machines for high availability on your on-premises management console.
+
+When using a high availability architecture:
+
+|Feature |Description |
+|||
+|**Secure connections** | An on-premises management console SSL/TLS certificate is applied to create a secure connection between the primary and secondary appliances. Use a CA-signed certificate or the self-signed certificate generated during installation. For more information, see: <br>- [SSL/TLS certificate requirements for on-premises resources](../best-practices/certificate-requirements.md) <br>- [Create SSL/TLS certificates for OT appliances](create-ssl-certificates.md) <br>- [Manage SSL/TLS certificates](../how-to-manage-the-on-premises-management-console.md#manage-ssltls-certificates) |
+|**Data backups** | The primary on-premises management console data is automatically backed up to the secondary on-premises management console every 10 minutes. <br><br>For more information, see [Backup and restore the on-premises management console](../back-up-restore-management.md). |
+|**System settings** | The system settings defined on the primary on-premises management console are duplicated on the secondary. For example, if the system settings are updated on the primary, they're also updated on the secondary. |
+
+For more information, see [About high availability](../how-to-set-up-high-availability.md).
+
+## Next steps
+
+> [!div class="step-by-step"]
+> [Prepare an on-premises management console appliance »](prepare-management-appliance.md)
defender-for-iot Connect Sensors To Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/connect-sensors-to-management.md
+
+ Title: Connect OT network sensors to an on-premises management console - Microsoft Defender for IoT
+description: Learn how to connect your OT network sensors to an on-premises management console.
Last updated : 01/16/2023+++
+# Connect OT network sensors to the on-premises management console
+
+This article is one in a series of articles describing the [deployment path](air-gapped-deploy.md) for a Microsoft Defender for IoT on-premises management console for air-gapped OT sensors.
++
+After you've installed and configured your OT network sensors, you can connect them to your on-premises management console for central management and network monitoring.
+
+## Prerequisites
+
+To perform the procedures in this article, make sure that you have:
+
+- An on-premises management console [installed](install-software-on-premises-management-console.md), [activated, and configured](activate-deploy-management.md)
+
+- One or more OT sensors [installed](install-software-ot-sensor.md), [activated, and configured](activate-deploy-sensor.md). To assign your OT sensor to a site and zone, make sure that you have at least one site and zone configured.
+
+- Access to both your on-premises management console and OT sensors as an **Admin** user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](../roles-on-premises.md).
+
+- To configure access to your OT sensors via proxy tunneling, make sure that you have access to the on-premises management console's CLI as a [privileged user](../references-work-with-defender-for-iot-cli-commands.md#privileged-user-access-for-ot-monitoring).
+
+## Connect OT sensors to the on-premises management console
+
+To connect OT sensors to the on-premises management console, copy a connection string from the on-premises management console and paste it as needed in your OT sensor console.
+
+**On your on-premises management console**:
+
+1. Sign into your on-premises management console, select **System Settings**, and then scroll down to the **Sensor Setup - Connection String** area. For example:
+
+ :::image type="content" source="../media/how-to-manage-sensors-from-the-on-premises-management-console/connection-string.png" alt-text="Screenshot that shows copying the connection string for the sensor.":::
+
+1. Copy the string in the **Copy Connection String** box to the clipboard.
+
+**On your OT sensor**:
+
+1. Sign into your OT sensor and select **System settings > Basic > Sensor Setup > Connection to management console**.
+
+1. In the **Connection String** field, paste the connection string you'd copied from the on-premises management console, and select **Connect**.
+
+After you've connected your OT sensors to your on-premises management console, you'll see those sensors listed on the on-premises management console's **Site Management** page as **Unassigned sensors**.
+
+> [!TIP]
+> When you [create sites and zones](sites-and-zones-on-premises.md), assign each sensor to a zone to [monitor detected data segmented separately](../monitor-zero-trust.md).
+>
+
+## Configure OT sensor access via tunneling
+
+You might want to enhance your system security by preventing direct access to your OT sensors.
+
+In such cases, configure [proxy tunneling](air-gapped-deploy.md#access-ot-network-sensors-via-proxy-tunneling) on your on-premises management console to allow users to connect to OT sensors via the on-premises management console. No configuration is needed on the sensor.
+
+While the default port used to access OT sensors via proxy tunneling is `9000`, modify this value to a different port as needed.
+
+**To configure OT sensor access via tunneling**:
+
+1. Sign into the on-premises management console's CLI via Telnet or SSH using a [privileged user](../references-work-with-defender-for-iot-cli-commands.md#privileged-user-access-for-ot-monitoring).
+
+1. Run:
+
+ ```bash
+ sudo cyberx-management-tunnel-enable
+ ```
+
+1. Allow a few minutes for the connection to start.
+
+When tunneling access is configured, the following URL syntax is used to access the sensor consoles: `https://<on-premises management console address>/<sensor address>/<page URL>`
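+
+For example, with a hypothetical on-premises management console at `192.168.0.101` and an OT sensor at `192.168.0.201`, browsing to the sensor might use a URL like the following, where the final segment stands in for whichever sensor page you're opening:
+
+```text
+https://192.168.0.101/192.168.0.201/login
+```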
+
+**To customize the port used with proxy tunneling**:
+
+1. Sign into the on-premises management console's CLI via Telnet or SSH using a [privileged user](../references-work-with-defender-for-iot-cli-commands.md#privileged-user-access-for-ot-monitoring).
+
+1. Run:
+
+ ```bash
+ sudo cyberx-management-tunnel-enable --port <port>
+ ```
+
+ Where `<port>` is the value of the port you want to use for proxy tunneling.
+
+**To remove the proxy tunneling configuration**:
+
+1. Sign into the on-premises management console's CLI via Telnet or SSH using a [privileged user](../references-work-with-defender-for-iot-cli-commands.md#privileged-user-access-for-ot-monitoring).
+
+1. Run:
+
+ ```bash
+ cyberx-management-tunnel-disable
+ ```
+
+**To access proxy tunneling log files**:
+
+Proxy tunneling log files are located in the following locations:
+
+- **On the on-premises management console**: */var/log/apache2.log*
+- **On the OT sensors**: */var/cyberx/logs/tunnel.log*
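+
+For example, to follow tunneling activity as it happens, you might tail these files from the relevant CLI (a minimal sketch, assuming your signed-in user has permission to read the paths listed above):
+
+```bash
+# On the on-premises management console: watch proxy tunneling activity live
+sudo tail -f /var/log/apache2.log
+
+# On an OT sensor: watch tunneling activity live
+sudo tail -f /var/cyberx/logs/tunnel.log
+```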
+
+## Next steps
+
+> [!div class="step-by-step"]
+> [« Activate and set up an on-premises management console](activate-deploy-management.md)
+
+> [!div class="step-by-step"]
+> [Create OT sites and zones on an on-premises management console »](sites-and-zones-on-premises.md)
defender-for-iot Create Learned Baseline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/create-learned-baseline.md
+
+ Title: Create a learned baseline of OT traffic - Microsoft Defender for IoT
+description: Learn about how to create a baseline of learned traffic on your OT sensor.
Last updated : 01/22/2023+++
+# Create a learned baseline of OT alerts
+
+This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT, and describes how to create a baseline of learned traffic on your OT sensor.
++
+## Understand learning mode
+
+An OT network sensor starts monitoring your network automatically after your [first sign-in](activate-deploy-sensor.md#sign-in-to-your-ot-sensor). Network devices start appearing in your device inventory, and [alerts](../alerts.md) are triggered for any security or operational incidents that occur in your network.
+
+Initially, this activity happens in *learning* mode, which instructs your OT sensor to learn your network's usual activity, including the devices and protocols in your network, and the regular file transfers that occur between specific devices. Any regularly detected activity becomes your network's baseline traffic.
++
+> [!TIP]
+> Use your time in learning mode to triage your alerts and *Learn* those that you want to mark as authorized, expected activity. Learned traffic doesn't generate new alerts the next time the same traffic is detected.
+>
+> After learning mode is turned off, any activity that differs from your baseline data will trigger an alert.
+
+For more information, see [Microsoft Defender for IoT alerts](../alerts.md).
+
+### Learn mode timeline
+
+Creating your baseline of OT alerts can take anywhere from a few days to several weeks, depending on your network size and complexity. Learning mode automatically turns off when the sensor detects a decrease in newly detected traffic, which is typically between 2-6 weeks after deployment.
+
+[Turn off learning mode manually before then](../how-to-manage-individual-sensors.md#turn-off-learning-mode-manually) if you feel that the current alerts accurately reflect your network activity.
+
+## Prerequisites
+
+You can perform the procedures in this article from the Azure portal, an OT sensor, or an on-premises management console.
+
+Before you start, make sure that you have:
+
+- An OT sensor [installed](install-software-ot-sensor.md) and [activated](activate-deploy-sensor.md), with alerts being triggered by detected traffic.
+
+- Access to your OT sensor as **Security Analyst** or **Admin** user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](../roles-on-premises.md).
+
+## Triage alerts
+
+Triage alerts towards the end of your deployment to create an initial baseline for your network activity.
+
+1. Sign into your OT sensor and select the **Alerts** page.
+
+1. Use sorting and grouping options to view your most critical alerts first. Review each alert to update statuses and learn alerts for OT authorized traffic.
+
+For more information, see [View and manage alerts on your OT sensor](../how-to-view-alerts.md).
+
+## Next steps
+
+> [!div class="step-by-step"]
+> [« Verify and update your detected device inventory](update-device-inventory.md)
+
+After learning mode is turned off, you've moved from *learning* mode to *operation* mode. Continue with any of the following:
+
+- [Visualize Microsoft Defender for IoT data with Azure Monitor workbooks](../workbooks.md)
+- [View and manage alerts from the Azure portal](../how-to-manage-cloud-alerts.md)
+- [Manage your device inventory from the Azure portal](../how-to-manage-device-inventory-for-organizations.md)
+
+Integrate Defender for IoT data with Microsoft Sentinel to unify your SOC team's security monitoring. For more information, see:
+
+- [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](../iot-solution.md)
+- [Tutorial: Investigate and detect threats for IoT devices](../iot-advanced-threat-monitoring.md)
defender-for-iot Create Ssl Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/create-ssl-certificates.md
+
+ Title: Create SSL/TLS certificates for OT appliances - Microsoft Defender for IoT
+description: Learn how to create SSL/TLS certificates for use with Microsoft Defender for IoT OT sensors and on-premises management consoles.
Last updated : 01/17/2023+++
+# Create SSL/TLS certificates for OT appliances
+
+This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT, and describes how to create CA-signed certificates to use with Defender for IoT on-premises appliances, including both OT sensors and [on-premises management consoles](air-gapped-deploy.md).
++
+Each certificate authority (CA)-signed certificate must have both a `.key` file and a `.crt` file, which are uploaded to Defender for IoT appliances after the first sign-in. While some organizations may also require a `.pem` file, a `.pem` file isn't required for Defender for IoT.
+
+> [!IMPORTANT]
+> You must create a unique certificate for each Defender for IoT appliance, where each certificate meets required criteria.
+
+## Prerequisites
+
+To perform the procedures described in this article, make sure that you have a security, PKI, or certificate specialist available to oversee the certificate creation.
+
+Make sure that you've also familiarized yourself with [SSL/TLS certificate requirements for Defender for IoT](../best-practices/certificate-requirements.md).
+
+## Create a CA-signed SSL/TLS certificate
+
+We recommend that you always use CA-signed certificates on production environments, and only use [self-signed certificates](#self-signed-certificates) on testing environments.
+
+Use a certificate management platform, such as an automated PKI management platform, to create a certificate that meets Defender for IoT requirements.
+
+If you don't have an application that can automatically create certificates, consult a security, PKI, or other qualified certificate lead for help. You can also convert existing certificate files if you don't want to create new ones.
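+
+For example, if you're generating the request manually with OpenSSL, a minimal sketch might look like the following. The subject values are placeholders only; substitute your own organization details and each appliance's fully qualified domain name, and submit the resulting `.csr` file to your CA:
+
+```bash
+# Generate a new 2048-bit RSA private key and a certificate signing request (CSR)
+openssl req -new -newkey rsa:2048 -nodes \
+  -keyout sensor.key \
+  -out sensor.csr \
+  -subj "/C=US/ST=Illinois/L=Springfield/O=Contoso Ltd/OU=Contoso Labs/CN=sensor.contoso.com"
+```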
+
+Make sure to create a unique certificate for each Defender for IoT appliance, where each certificate meets required [parameter criteria](../best-practices/certificate-requirements.md).
+
+**For example**:
+
+1. Open the downloaded certificate file and select the **Details** tab > **Copy to file** to run the **Certificate Export Wizard**.
+
+1. In the **Certificate Export Wizard**, select **Next** > **DER encoded binary X.509 (.CER)** > and then select **Next** again.
+
+1. In the **File to Export** screen, select **Browse**, choose a location to store the certificate, and then select **Next**.
+
+1. Select **Finish** to export the certificate.
+
+> [!NOTE]
+> You may need to convert existing file types to supported types.
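+
+For example, if your existing certificate is stored as a PKCS#12 (`.pfx`) bundle, the following sketch (assuming a hypothetical file named `sensor.pfx`) extracts the `.crt` and `.key` files that Defender for IoT expects. You may need to remove any `Bag Attributes` header lines from the resulting files:
+
+```bash
+# Extract the certificate from the PKCS#12 bundle
+openssl pkcs12 -in sensor.pfx -clcerts -nokeys -out sensor.crt
+
+# Extract the unencrypted private key from the same bundle
+openssl pkcs12 -in sensor.pfx -nocerts -nodes -out sensor.key
+```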
+
+Verify that the certificate meets [certificate file requirements](../best-practices/certificate-requirements.md#crt-file-requirements), and then [test the certificate](#test-your-ssltls-certificates) file you created when you're done.
+
+If you aren't using certificate validation, remove the CRL URL reference in the certificate. For more information, see [certificate file requirements](../best-practices/certificate-requirements.md#crt-file-requirements).
+
+> [!TIP]
+> (Optional) Create a certificate chain, which is a `.pem` file that contains the certificates of all the certificate authorities in the chain of trust that led to your certificate.
+
+## Verify CRL server access
+
+If your organization validates certificates, your Defender for IoT appliances must be able to access the CRL server defined by the certificate. By default, certificates access the CRL server URL via HTTP port 80. However, some organizational security policies block access to this port.
+
+If your appliances can't access your CRL server on port 80, you can use one of the following workarounds:
+
+- **Define another URL and port in the certificate**:
+
+    - The URL you define must be configured as `http://` and not `https://`
+ - Make sure that the destination CRL server can listen on the port you define
+
+- **Use a proxy server that can access the CRL on port 80**
+
+ For more information, see [Forward OT alert information].
+
+If validation fails, communication between the relevant components is halted and a validation error is presented in the console.
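+
+For example, to confirm which CRL endpoint your certificate defines, and whether that endpoint responds over HTTP, you might run the following commands from a machine that has the certificate file, OpenSSL, and curl available. The CRL URL shown is a placeholder; use the URL reported by the first command:
+
+```bash
+# Show the CRL Distribution Points extension embedded in the certificate
+openssl x509 -in certificate.crt -noout -text | grep -A 4 "CRL Distribution Points"
+
+# Check that the CRL URL responds on HTTP port 80 (replace with the URL from your certificate)
+curl -I http://crl.contoso.com/contoso.crl
+```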
+
+## Import the SSL/TLS certificate to a trusted store
+
+After creating your certificate, import it to a trusted storage location. For example:
+
+1. Open the security certificate file and, in the **General** tab, select **Install Certificate** to start the **Certificate Import Wizard**.
+
+1. In **Store Location**, select **Local Machine**, then select **Next**.
+
+1. If a **User Account Control** prompt appears, select **Yes** to allow the app to make changes to your device.
+
+1. In the **Certificate Store** screen, select **Automatically select the certificate store based on the type of certificate**, then select **Next**.
+
+1. Select **Place all certificates in the following store**, then **Browse**, and then select the **Trusted Root Certification Authorities** store. When you're done, select **Next**. For example:
+
+ :::image type="content" source="../media/how-to-deploy-certificates/certificate-store-trusted-root.png" alt-text="Screenshot of the certificate store screen where you can browse to the trusted root folder." lightbox="../media/how-to-activate-and-set-up-your-sensor/certificate-store-trusted-root.png":::
+
+1. Select **Finish** to complete the import.
+
+## Test your SSL/TLS certificates
+
+Use the following procedures to test certificates before deploying them to your Defender for IoT appliances.
+
+### Check your certificate against a sample
+
+Use the following sample certificate to compare to the certificate you've created, making sure that the same fields exist in the same order.
+
+``` Sample SSL certificate
+Bag Attributes: <No Attributes>
+subject=C = US, S = Illinois, L = Springfield, O = Contoso Ltd, OU= Contoso Labs, CN= sensor.contoso.com, E
+= support@contoso.com
+issuer C=US, S = Illinois, L = Springfield, O = Contoso Ltd, OU= Contoso Labs, CN= Cert-ssl-root-da2e22f7-24af-4398-be51-
+e4e11f006383, E = support@contoso.com
+--BEGIN CERTIFICATE--
+MIIESDCCAZCgAwIBAgIIEZK00815Dp4wDQYJKoZIhvcNAQELBQAwgaQxCzAJBgNV
+BAYTAIVTMREwDwYDVQQIDAhJbGxpbm9pczEUMBIGA1UEBwwLU3ByaW5nZmllbGQx
+FDASBgNVBAoMCONvbnRvc28gTHRKMRUWEwYDVQQLDAXDb250b3NvIExhYnMxGzAZ
+BgNVBAMMEnNlbnNvci5jb250b3NvLmNvbTEIMCAGCSqGSIb3DQEJARYTc3VwcG9y
+dEBjb250b3NvLmNvbTAeFw0yMDEyMTcxODQwMzhaFw0yMjEyMTcxODQwMzhaMIGK
+MQswCQYDVQQGEwJVUzERMA8GA1UECAwISWxsaW5vaXMxFDASBgNVBAcMC1Nwcmlu
+Z2ZpZWxkMRQwEgYDVQQKDAtDb250b3NvIEX0ZDEVMBMGA1UECwwMQ29udG9zbyBM
+YWJzMRswGQYDVQQDDBJzZW5zb3luY29udG9zby5jb20xljAgBgkqhkiG9w0BCQEW
+E3N1cHBvcnRAY29udG9zby5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
+AoIBAQDRGXBNJSGJTfP/K5ThK8vGOPzh/N8AjFtLvQiiSfkJ4cxU/6d1hNFEMRYG
+GU+jY1Vknr0|A2nq7qPB1BVenW3 MwsuJZe Floo123rC5ekzZ7oe85Bww6+6eRbAT
+WyqpvGVVpfcsloDznBzfp5UM9SVI5UEybllod31MRR/LQUEIKLWILHLW0eR5pcLW
+pPLtOW7wsK60u+X3tqFo1AjzsNbXbEZ5pnVpCMqURKSNmxYpcrjnVCzyQA0C0eyq
+GXePs9PL5DXfHy1x4WBFTd98X83 pmh/vyydFtA+F/imUKMJ8iuOEWUtuDsaVSX0X
+kwv2+emz8CMDLsbWvUmo8Sg0OwfzAgMBAAGjfDB6MB0GA1UdDgQWBBQ27hu11E/w
+21Nx3dwjp0keRPuTsTAfBgNVHSMEGDAWgBQ27hu1lE/w21Nx3dwjp0keRPUTSTAM
+BgNVHRMEBTADAQH/MAsGA1UdDwQEAwIDqDAdBgNVHSUEFjAUBggrBgEFBQcDAgYI
+KwYBBQUHAwEwDQYJKoZIhvcNAQELBQADggEBADLsn1ZXYsbGJLLzsGegYv7jmmLh
+nfBFQqucORSQ8tqb2CHFME7LnAMfzFGpYYV0h1RAR+1ZL1DVtm+IKGHdU9GLnuyv
+9x9hu7R4yBh3K99ILjX9H+KACvfDUehxR/ljvthoOZLalsqZIPnRD/ri/UtbpWtB
+cfvmYleYA/zq3xdk4vfOI0YTOW11qjNuBIHh0d5S5sn+VhhjHL/s3MFaScWOQU3G
+9ju6mQSo0R1F989aWd+44+8WhtOEjxBvr+17CLqHsmbCmqBI7qVnj5dHvkh0Bplw
+zhJp150DfUzXY+2sV7Uqnel9aEU2Hlc/63EnaoSrxx6TEYYT/rPKSYL+++8=
+--END CERTIFICATE--
+```
+
+### Test certificates without a `.csr` or private key file
+
+If you want to check the information within the certificate `.csr` file or private key file, use the following CLI commands:
+
+- **Check a Certificate Signing Request (CSR)**: Run `openssl req -text -noout -verify -in CSR.csr`
+- **Check a private key**: Run `openssl rsa -in privateKey.key -check`
+- **Check a certificate**: Run `openssl x509 -in certificate.crt -text -noout`
+
+If these tests fail, review [certificate file requirements](../best-practices/certificate-requirements.md#crt-file-requirements) to verify that your file parameters are accurate, or consult your certificate specialist.
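+
+You can also confirm that a private key matches its certificate by comparing their RSA moduli. This is a general OpenSSL check rather than a Defender for IoT-specific requirement, and the file names below are placeholders:
+
+```bash
+# The two digests should be identical if the key and certificate belong together (RSA keys)
+openssl x509 -noout -modulus -in certificate.crt | openssl md5
+openssl rsa -noout -modulus -in privateKey.key | openssl md5
+```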
+
+### Validate the certificate's common name
+
+1. To view the certificate's common name, open the certificate file, select the **Details** tab, and then select the **Subject** field.
+
+ The certificate's common name appears next to **CN**.
+
+1. Sign in to your sensor console without a secure connection. In the **Your connection isn't private** warning screen, you might see a **NET::ERR_CERT_COMMON_NAME_INVALID** error message.
+
+1. Select the error message to expand it, and then copy the string next to **Subject**. For example:
+
+ :::image type="content" source="../media/how-to-deploy-certificates/connection-is-not-private-subject.png" alt-text="Screenshot of the connection isn't private screen with the details expanded." lightbox="../media/how-to-deploy-certificates/connection-is-not-private-subject.png":::
+
+ The subject string should match the **CN** string in the security certificate's details.
+
+1. In your local file explorer, browse to the hosts file, such as at **This PC > Local Disk (C:) > Windows > System32 > drivers > etc**, and open the **hosts** file.
+
+1. In the hosts file, add a line at the end of the document with the sensor's IP address and the SSL certificate's common name that you copied in the previous steps. When you're done, save the changes. For example:
+
+ :::image type="content" source="../media/how-to-deploy-certificates/hosts-file.png" alt-text="Screenshot of the hosts file." lightbox="../media/how-to-activate-and-set-up-your-sensor/hosts-file.png":::
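+
+    A hypothetical entry, assuming a sensor at `10.1.0.20` and the sample common name used earlier in this article, might look like this:
+
+    ```text
+    10.1.0.20    sensor.contoso.com
+    ```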
+
+### Self-signed certificates
+
+Self-signed certificates are available for use in testing environments after installing Defender for IoT OT monitoring software. For more information, see:
+
+- [Create and deploy self-signed certificates on OT sensors](../how-to-manage-individual-sensors.md#manage-ssltls-certificates)
+- [Create and deploy self-signed certificates on on-premises management consoles](../how-to-manage-the-on-premises-management-console.md#manage-ssltls-certificates)
+
+## Next steps
+
+> [!div class="step-by-step"]
+> [« Prepare an OT site deployment](../best-practices/plan-prepare-deploy.md)
defender-for-iot Install Software On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/install-software-on-premises-management-console.md
# Install Microsoft Defender for IoT on-premises management console software
+This article is one in a series of articles describing the [deployment path](air-gapped-deploy.md) for a Microsoft Defender for IoT on-premises management console for air-gapped OT sensors.
++ Use the procedures in this article when installing Microsoft Defender for IoT software on an on-premises management console. You might be reinstalling software on a [pre-configured appliance](../ot-pre-configured-appliances.md), or you may be installing software on your own appliance.
-## Prerequisites
-Before installing Microsoft Defender for IoT, make sure that you have:
+## Prerequisites
-- [Traffic mirroring configured in your network](../best-practices/traffic-mirroring-methods.md)-- An [OT plan in Defender for IoT](../how-to-manage-subscriptions.md) on your Azure subscription-- An OT sensor [onboarded to Defender for IoT](../onboard-sensors.md) in the Azure portal-- [OT monitoring software installed on an OT network sensor](install-software-ot-sensor.md)
+Before installing Defender for IoT software on your on-premises management console, make sure that you have:
-Each appliance type also comes with its own set of instructions that are required before installing Defender for IoT software. Make sure that you've completed any specific procedures required for your appliance before installing Defender for IoT software.
+- An [OT plan in Defender for IoT](../getting-started.md) on your Azure subscription.
-For more information, see:
+- Access to the Azure portal as a [Security Reader](../../../role-based-access-control/built-in-roles.md#security-reader), [Security Admin](../../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../../role-based-access-control/built-in-roles.md#owner) user
-- The [OT monitoring appliance catalog](../appliance-catalog/index.yml)-- [Which appliances do I need?](../ot-appliance-sizing.md)-- [OT monitoring with virtual appliances](../ot-virtual-appliances.md)
+- A [physical or virtual appliance prepared](prepare-management-appliance.md) for your on-premises management console.
## Download software files from the Azure portal
Select **Getting started** > **On-premises management console** and select the s
> [!IMPORTANT] > If you're updating software from a previous version, alternately use the options from the **Sites and sensors** > **Sensor update (Preview)** menu. Use this option especially when you're updating your on-premises management console together with connected OT sensors. For more information, see [Update Defender for IoT OT monitoring software](../update-ot-software.md). + ## Install on-premises management console software This procedure describes how to install OT management software on an on-premises management console, for a physical or virtual appliance. The installation process takes about 20 minutes. After the installation, the system is restarted several times.
+> [!NOTE]
+> Towards the end of this process you will be presented with the usernames and passwords for your device. Make sure to copy these down as these passwords will not be presented again.
+ **To install the software**: 1. Mount the ISO file onto your hardware appliance or VM using one of the following options: - **Physical media** – burn the ISO file to your external storage, and then boot from the media.
- - DVDs: First burn the software to the DVD as an image
- - USB drive: First make sure that you've created a bootable USB drive with software such as [Rufus](https://rufus.ie/en/), and then save the software to the USB drive. USB drives must have USB version 3.0 or later.
+ - DVDs: First burn the software to the DVD as an image
+ - USB drive: First make sure that you've created a bootable USB drive with software such as [Rufus](https://rufus.ie/en/), and then save the software to the USB drive. USB drives must have USB version 3.0 or later.
Your physical media must have a minimum of 4-GB storage. - **Virtual mount** – use iLO for HPE appliances, or iDRAC for Dell appliances to boot the ISO file.
-1. Select your preferred language for the installation process.
+1. Select your preferred language for the installation process. For example:
:::image type="content" source="../media/tutorial-install-components/on-prem-language-select.png" alt-text="Screenshot of selecting your preferred language for the installation process.":::
-1. Select **MANAGEMENT-RELEASE-\<version\>\<deployment type\>**.
+1. From the options displayed, select the management release you want to install based on the hardware profile you're using.
- :::image type="content" source="../media/tutorial-install-components/on-prem-install-screen.png" alt-text="Screenshot of selecting your management release version.":::
+1. Define the following network properties as prompted:
-1. In the Installation Wizard, define the network properties:
+ - For the **Configure management network interface** prompt: For Dell appliances, enter `eth0` and `eth1`. For HP appliances, enter `enu1` and `enu2`, or `possible value`.
- :::image type="content" source="../media/tutorial-install-components/on-prem-first-steps-install.png" alt-text="Screenshot that shows the appliance profile.":::
+ - For the **Configure management network IP address**, **Configure subnet mask**, **Configure DNS**, and **Configure default gateway IP address** prompts, enter the relevant values for each item.
- | Parameter | Configuration |
- |--|--|
- | **configure management network interface** | For Dell: **eth0, eth1** <br /> For HP: **enu1, enu2** <br> Or <br />**possible value** |
- | **configure management network IP address** | Enter an IP address |
- | **configure subnet mask** | Enter an IP address|
- | **configure DNS** | Enter an IP address |
- | **configure default gateway IP address** | Enter an IP address|
+1. **(Optional)** To install a secondary Network Interface Card (NIC), define a hardware profile and network properties as prompted.
-1. **(Optional)** If you would like to install a secondary Network Interface Card (NIC), define the following appliance profile, and network properties:
-
- | Parameter | Configuration |
- |--|--|
- | **configure sensor monitoring interface** (Optional) | **eth1** or **possible value** |
- | **configure an IP address for the sensor monitoring interface** | Enter an IP address |
- | **configure a subnet mask for the sensor monitoring interface** | Enter an IP address |
+    For the **Configure sensor monitoring interface** prompt, enter `eth1` or `possible value`. For the other prompts, enter the relevant values for each item.
For example:
-
+ :::image type="content" source="../media/tutorial-install-components/on-prem-secondary-nic-install.png" alt-text="Screenshot that shows the Secondary NIC install questions.":::
- If you choose not to install the secondary NIC now, you can [do so at a later time](#add-a-secondary-nic-after-installation-optional).
+ If you choose not to install the secondary NIC now, you can [do so at a later time](../how-to-manage-the-on-premises-management-console.md#add-a-secondary-nic-after-installation).
-1. Accept the settings and continue by typing `Y`.
+1. Accept the settings and continue by entering `Y`.
-1. After about 10 minutes, the two sets of credentials appear. For example:
+1. <a name="users"></a>After about 10 minutes, the two sets of credentials appear. For example:
:::image type="content" source="../media/tutorial-install-components/credentials-screen.png" alt-text="Screenshot of the credentials that appear that must be copied as they won't be presented again.":::
The installation process takes about 20 minutes. After the installation, the sys
1. Select **Enter** to continue.
-### Add a secondary NIC after installation (optional)
+## Configure network adapters for a VM deployment
-You can enhance security to your on-premises management console by adding a secondary NIC dedicated for attached sensors within an IP address range. When you use a secondary NIC, the first is dedicated for end-users, and the secondary supports the configuration of a gateway for routed networks.
+After deploying an on-premises management console on a [virtual appliance](../ot-virtual-appliances.md), configure at least one network adapter on your VM to connect to both the on-premises management console UI and any connected OT sensors. If you've added a secondary NIC to separate the two connections, configure two separate network adapters.
+**On your virtual machine**:
-Both NICs will support the user interface (UI). If you choose not to deploy a secondary NIC, all of the features will be available through the primary NIC.
+1. Open your VM settings for editing.
+1. Together with the other hardware defined for your VM, such as memory, CPUs, and hard disk, add the following network adapters:
-This procedure describes how to add a secondary NIC if you've already installed your on-premises management console.
+ |Adapters |Description |
+ |||
+ |**Single network adapter** | To use a single network adapter, add **Network adapter 1** to connect to the on-premises management console UI and any connected OT sensors. |
+ |**Secondary NIC** | To use a secondary NIC in addition to your main network adapter, add: <br> <br> - **Network adapter 1** to connect to the on-premises management console UI <br> - **Network adapter 2**, to connect to connected OT sensors |
-**To add a secondary NIC**:
-
-1. Use the network reconfigure command:
-
- ```bash
- sudo cyberx-management-network-reconfigure
- ```
-
-1. Enter the following responses to the following questions:
-
- :::image type="content" source="../media/tutorial-install-components/network-reconfig-command.png" alt-text="Screenshot of the required answers to configure your appliance. ":::
-
- | Parameters | Response to enter |
- |--|--|
- | **Management Network IP address** | `N` |
- | **Subnet mask** | `N` |
- | **DNS** | `N` |
- | **Default gateway IP Address** | `N` |
- | **Sensor monitoring interface** <br>Optional. Relevant when sensors are on a different network segment.| `Y`, and select a possible value |
- | **An IP address for the sensor monitoring interface** | `Y`, and enter an IP address that's accessible by the sensors|
- | **A subnet mask for the sensor monitoring interface** | `Y`, and enter an IP address that's accessible by the sensors|
- | **Hostname** | Enter the hostname |
+For more information, see:
-1. Review all choices and enter `Y` to accept the changes. The system reboots.
+- Your virtual machine software documentation
+- [On-premises management console (VMware ESXi)](../appliance-catalog/virtual-management-vmware.md)
+- [On-premises management console (Microsoft Hyper-V hypervisor)](../appliance-catalog/virtual-management-hyper-v.md)
+- [Networking requirements](../networking-requirements.md)
-### Find a port on your appliance
+## Find a port on your appliance
-If you're having trouble locating the physical port on your appliance, you can use the following command to find your port:
+If you're having trouble locating the physical port on your appliance, sign into the on-premises management console and run the following command to find your port:
```bash sudo ethtool -p <port value> <time-in-seconds> ```
-This command will cause the light on the port to flash for the specified time period. For example, entering `sudo ethtool -p eno1 120`, will have port eno1 flash for 2 minutes, allowing you to find the port on the back of your appliance.
-
+This command causes the light on the port to flash for the specified time period. For example, entering `sudo ethtool -p eno1 120` causes port eno1 to flash for 2 minutes, helping you find the port on the back of your appliance.
## Next steps
-> [!div class="nextstepaction"]
-> [Validate after installing software](post-install-validation-ot-software.md)
+For more information, see [Troubleshoot the on-premises management console](../how-to-troubleshoot-on-premises-management-console.md).
+
+> [!div class="step-by-step"]
+> [« Prepare an on-premises management console appliance](prepare-management-appliance.md)
-> [!div class="nextstepaction"]
-> [Troubleshooting](../how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
+> [!div class="step-by-step"]
+> [Activate and set up an on-premises management console »](activate-deploy-management.md)
defender-for-iot Install Software Ot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/install-software-ot-sensor.md
Title: Install OT network monitoring software on OT sensors - Microsoft Defender for IoT
-description: Learn how to install agentless monitoring software for an OT sensor for Microsoft Defender for IoT. Use this article when reinstalling software on a pre-configured appliance, or if you've chosen to install software on your own appliances.
+description: Learn how to install agentless monitoring software for an OT sensor for Microsoft Defender for IoT. Use this article if you've chosen to install software on your own appliances or when reinstalling software on a pre-configured appliance.
Last updated 12/13/2022 # Install OT monitoring software on OT sensors
-Use the procedures in this article when installing Microsoft Defender for IoT software on OT network sensors. You might be reinstalling software on a [pre-configured appliance](../ot-pre-configured-appliances.md), or you may be installing software on your own appliance.
+This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT, and describes how to install Defender for IoT software on OT sensors.
++
+Use the procedures in this article when installing Microsoft Defender for IoT software on your own appliances. You might be reinstalling software on a [pre-configured appliance](../ot-pre-configured-appliances.md), or you may be installing software on your own appliance.
+
+If you're using pre-configured appliances, skip this step and continue directly with [activating and setting up your OT network sensor](activate-deploy-sensor.md) instead.
+ ## Prerequisites Before installing Microsoft Defender for IoT, make sure that you have: -- [Traffic mirroring configured in your network](../best-practices/traffic-mirroring-methods.md)-- An [OT plan in Defender for IoT](../how-to-manage-subscriptions.md) on your Azure subscription-- An OT sensor [onboarded to Defender for IoT](../onboard-sensors.md) in the Azure portal
+- A [plan](../best-practices/plan-prepare-deploy.md) for your OT site deployment with Defender for IoT, including the appliance you'll be using for your OT sensor.
-Each appliance type also comes with its own set of instructions that are required before installing Defender for IoT software. Make sure that you've completed any specific procedures required for your appliance before installing Defender for IoT software.
+- Access to the Azure portal as a [Security Reader](../../../role-based-access-control/built-in-roles.md#security-reader), [Security Admin](../../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../../role-based-access-control/built-in-roles.md#owner) user.
-For more information, see:
+- Performed extra procedures per appliance type. Each appliance type also comes with its own set of instructions that are required before installing Defender for IoT software.
-- The [OT monitoring appliance catalog](../appliance-catalog/index.yml)-- [Which appliances do I need?](../ot-appliance-sizing.md)-- [OT monitoring with virtual appliances](../ot-virtual-appliances.md)
+ Make sure that you've completed any specific procedures required for your appliance before installing Defender for IoT software.
+
+ For more information, see:
+
+ - The [OT monitoring appliance catalog](../appliance-catalog/index.yml)
+ - [Which appliances do I need?](../ot-appliance-sizing.md)
+ - [OT monitoring with virtual appliances](../ot-virtual-appliances.md)
+
+This step is performed by your deployment teams.
## Download software files from the Azure portal Download the OT sensor software from Defender for IoT in the Azure portal.
-Select **Getting started** > **Sensor** and select the software version you want to download.
+In Defender for IoT on the Azure portal, select **Getting started** > **Sensor**, and then select the software version you want to download.
> [!IMPORTANT] > If you're updating software from a previous version, use the options from the **Sites and sensors** > **Sensor update** menu. For more information, see [Update Defender for IoT OT monitoring software](../update-ot-software.md). ## Install Defender for IoT software on OT sensors
-This procedure describes how to install OT monitoring software on a sensor.
+This procedure describes how to install OT monitoring software on an OT network sensor.
-> [!Note]
+> [!NOTE]
> Towards the end of this process you will be presented with the usernames and passwords for your device. Make sure to copy these down as these passwords will not be presented again.
+**To install your software**:
+ 1. Mount the ISO file onto your hardware appliance or VM using one of the following options: - **Physical media** – burn the ISO file to your external storage, and then boot from the media.
- - DVDs: First burn the software to the DVD as an image
- - USB drive: First make sure that you've created a bootable USB drive with software such as [Rufus](https://rufus.ie/en/), and then save the software to the USB drive. USB drives must have USB version 3.0 or later.
+ - DVDs: First burn the software to the DVD as an image
+ - USB drive: First make sure that you've created a bootable USB drive with software such as [Rufus](https://rufus.ie/en/), and then save the software to the USB drive. USB drives must have USB version 3.0 or later.
Your physical media must have a minimum of 4-GB storage. - **Virtual mount** – use iLO for HPE appliances, or iDRAC for Dell appliances to boot the ISO file.
-1. When the installation boots, you're first prompted to select the hardware profile you want to install.
+1. When the installation boots, you're first prompted to select the hardware profile you want to use. For example:
:::image type="content" source="../media/tutorial-install-components/sensor-architecture.png" alt-text="Screenshot of the sensor's hardware profile options." lightbox="../media/tutorial-install-components/sensor-architecture.png"::: For more information, see [Which appliances do I need?](../ot-appliance-sizing.md).
- System files are installed, the sensor reboots, and then sensor files are installed. This process can take a few minutes.
+ After you've selected the hardware profile, the following steps occur, and can take a few minutes:
- When the installation steps are complete, the Ubuntu **Package configuration** screen is displayed, with the `Configuring iot-sensor` wizard, showing a prompt to select your monitor interfaces.
+ - System files are installed
+ - The sensor appliance reboots
+ - Sensor files are installed
- In this wizard, use the up or down arrows to navigate, and the SPACE bar to select an option. Press ENTER to advance to the next screen.
+ When the installation steps are complete, the Ubuntu **Package configuration** screen is displayed, with the `Configuring iot-sensor` wizard, showing a prompt to select your monitor interfaces.
-1. In the `Select monitor interfaces` screen, select the interfaces you want to monitor.
+ In the `Configuring iot-sensor` wizard, use the up or down arrows to navigate, and the SPACE bar to select an option. Press ENTER to advance to the next screen.
- > [!IMPORTANT]
- > Make sure that you select only interfaces that are connected.
- > If you select interfaces that are enabled but not connected, the sensor will show a *No traffic monitored* health notification in the Azure portal. If you connect more traffic sources after installation and want to monitor them with Defender for IoT, you can add them via the CLI.
+1. In the wizard's `Select monitor interfaces` screen, select the interfaces you want to monitor.
By default, `eno1` is reserved for the management interface and we recommend that you leave this option unselected.
This procedure describes how to install OT monitoring software on a sensor.
:::image type="content" source="../media/tutorial-install-components/monitor-interface.png" alt-text="Screenshot of the select monitor interface screen.":::
+ > [!IMPORTANT]
+ > Make sure that you select only interfaces that are connected.
+ >
+ > If you select interfaces that are enabled but not connected, the sensor will show a *No traffic monitored* health notification in the Azure portal. If you connect more traffic sources after installation and want to monitor them with Defender for IoT, you can add them via the [CLI](../references-work-with-defender-for-iot-cli-commands.md).
+ 1. In the `Select erspan monitor interfaces` screen, select any ERSPAN monitoring ports that you have. The wizard lists available interfaces, even if you don't have any ERSPAN monitoring ports in your system. If you have no ERSPAN monitoring ports, leave all options unselected. For example:
This procedure describes how to install OT monitoring software on a sensor.
For more information, see [Connect Microsoft Defender for IoT sensors without direct internet access by using a proxy (version 10.x)](../how-to-connect-sensor-by-proxy.md). - 1. <a name=credentials></a>The installation process starts running and then shows the credentials screen. For example: :::image type="content" source="../media/tutorial-install-components/login-information.png" alt-text="Screenshot of the final screen of the installation with usernames, and passwords.":::
This procedure describes how to install OT monitoring software on a sensor.
:::image type="content" source="../media/tutorial-install-components/install-complete.png" alt-text="Screenshot of the sign-in confirmation.":::
-Make sure that your sensor is connected to your network, and then you can sign in to your sensor via a network-connected browser. For more information, see [Activate and set up your sensor](../how-to-activate-and-set-up-your-sensor.md#activate-and-set-up-your-sensor).
+## Configure network adapters for a VM deployment
+After deploying an OT sensor on a [virtual appliance](../ot-virtual-appliances.md), configure at least two network adapters on your VM: one to connect to the Azure portal, and another to connect to traffic mirroring ports.
+
+**On your virtual machine**:
+
+1. Open your VM settings for editing.
+
+1. Together with the other hardware defined for your VM, such as memory, CPUs, and hard disk, add the following network adapters:
+
+ - **Network adapter 1**, to connect to the Azure portal for cloud management.
+ - **Network adapter 2**, to connect to a traffic mirroring port that's configured to allow promiscuous mode traffic. If you're connecting your sensor to multiple traffic mirroring ports, make sure there's a network adapter configured for each port.
+
+For more information, see:
+
+- Your virtual machine software documentation
+- [OT network sensor VM (VMware ESXi)](../appliance-catalog/virtual-sensor-vmware.md)
+- [OT network sensor VM (Microsoft Hyper-V)](../appliance-catalog/virtual-sensor-hyper-v.md)
+- [Networking requirements](../networking-requirements.md)
+
+> [!NOTE]
+> If you're working with an air-gapped sensor and are [deploying an on-premises management console](air-gapped-deploy.md), configure **Network adapter 1** to connect to the on-premises management console UI instead of the Azure portal.
+>
## Next steps
-> [!div class="nextstepaction"]
-> [Validate after installing software](post-install-validation-ot-software.md)
+For more information, see [Troubleshoot the sensor](../how-to-troubleshoot-sensor.md).
-> [!div class="nextstepaction"]
-> [Install software on an on-premises management console](install-software-on-premises-management-console.md)
+> [!div class="step-by-step"]
+> [« Provision OT sensors for cloud management](provision-cloud-management.md)
-> [!div class="nextstepaction"]
-> [Troubleshooting](../how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
+> [!div class="step-by-step"]
+> [Validate after installing software »](post-install-validation-ot-software.md)
defender-for-iot Ot Deploy Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/ot-deploy-path.md
+
+ Title: Deploy Defender for IoT for OT monitoring - Microsoft Defender for IoT
+description: Learn about the steps involved in deploying a Microsoft Defender for IoT system for OT monitoring.
+ Last updated : 02/13/2023++
+# Deploy Defender for IoT for OT monitoring
+
+This article describes the high-level steps required to deploy Defender for IoT for OT monitoring. Learn more about each deployment step in the sections below, including relevant cross-references for more details.
+
+The following image shows the phases in an end-to-end OT monitoring deployment path, together with the team responsible for each phase.
+
+While teams and job titles differ across different organizations, all Defender for IoT deployments require communication between the people responsible for the different areas of your network and infrastructure.
++
+> [!TIP]
+> Each step in the process can take a different amount of time. For example, downloading an OT sensor activation file may take five minutes, while configuring traffic monitoring may take days or even weeks, depending on your organization's processes.
+>
+> We recommend that you start the process for each step without waiting for it to be completed before moving on to the next step. Make sure to continue following up on any steps still in process to ensure their completion.
+
+## Prerequisites
+
+Before you start planning your OT monitoring deployment, make sure that you have an Azure subscription and an OT plan onboarded to Defender for IoT.
+
+For more information, see [Add an OT plan to your Azure subscription](../getting-started.md).
+
+## Planning and preparing
+
+The following image shows the steps included in the planning and preparing phase. Planning and preparing steps are handled by your architecture teams.
++
+#### Plan your OT monitoring system
+
+Plan basic details about your monitoring system, such as:
+
+- **Sites and zones**: Decide how you'll segment the network you want to monitor using *sites* and *zones* that can represent locations all around the world.
+
+- **Sensor management**: Decide whether you'll be using cloud-connected or air-gapped, locally-managed OT sensors, or a hybrid system of both. If you're using cloud-connected sensors, select a connection method, such as connecting directly or via a proxy.
+
+- **Users and roles**: List the types of users you'll need on each sensor, and the roles that they'll need for each activity.
+
+For more information, see [Plan your OT monitoring system with Defender for IoT](../best-practices/plan-corporate-monitoring.md).
+
+> [!TIP]
+> If you're using several locally-managed sensors, you may also want to deploy an [on-premises management console](air-gapped-deploy.md) for central visibility and management.
+>
+#### Prepare for an OT site deployment
+
+Define additional details for each site planned in your system, including:
+
+- **A network diagram**: Identify all of the devices you want to monitor and create a well-defined list of subnets. After you've deployed your sensors, use this list to verify that all the subnets you want to monitor are covered by Defender for IoT.
+
+- **A list of sensors**: Use the list of traffic, subnets, and devices you want to monitor to create a list of the OT sensors you'll need and where they'll be placed in your network.
+
+- **Traffic mirroring methods**: Choose a traffic mirroring method for each OT sensor, such as a SPAN port or TAP.
+
+- **Appliances**: Prepare a deployment workstation and any hardware or VM appliances you'll be using for each of the OT sensors you've planned. If you're using pre-configured appliances, make sure to order them.
+
+For more information, see [Prepare an OT site deployment](../best-practices/plan-prepare-deploy.md).
+
+## Onboard sensors to Azure
+
+The following image shows the step included in the onboard sensors phase. Sensors are onboarded to Azure by your deployment teams.
++
+#### Onboard OT sensors on the Azure portal
+
+Onboard as many OT sensors to Defender for IoT as you've planned. Make sure to download the activation files provided for each OT sensor and save them in a location that will be accessible from your sensor machines.
+
+For more information, see [Onboard OT sensors to Defender for IoT](../onboard-sensors.md).
+
+## Site networking setup
+
+The following image shows the steps included in the site networking setup phase. Site networking steps are handled by your connectivity teams.
++
+#### Configure traffic mirroring in your network
+
+Use the plans you'd created [earlier](#prepare-for-an-ot-site-deployment) to configure traffic mirroring at the places in your network where you'll be deploying OT sensors and mirroring traffic to Defender for IoT.
+
+For more information, see:
+
+- [Configure mirroring with a switch SPAN port](../traffic-mirroring/configure-mirror-span.md)
+- [Configure traffic mirroring with a Remote SPAN (RSPAN) port](../traffic-mirroring/configure-mirror-rspan.md)
+- [Configure active or passive aggregation (TAP)](../best-practices/traffic-mirroring-methods.md#active-or-passive-aggregation-tap)
+- [Configure traffic mirroring with an encapsulated remote switched port analyzer (ERSPAN)](../traffic-mirroring/configure-mirror-erspan.md)
+- [Configure traffic mirroring with an ESXi vSwitch](../traffic-mirroring/configure-mirror-esxi.md)
+- [Configure traffic mirroring with a Hyper-V vSwitch](../traffic-mirroring/configure-mirror-hyper-v.md)
+
+#### Provision for cloud management
+
+Configure any firewall rules to ensure that your OT sensor appliances will be able to access Defender for IoT on the Azure cloud. If you're planning to connect via a proxy, you'll configure those settings only after installing your sensor.
+
+Skip this step for any OT sensor that is planned to be air-gapped and managed locally, either directly on the sensor console, or via an [on-premises management console](air-gapped-deploy.md).
+
+For more information, see [Provision OT sensors for cloud management](provision-cloud-management.md).
+
+## Deploy your OT sensors
+
+The following image shows the steps included in the sensor deployment phase. OT sensors are deployed and activated by your deployment team.
++
+#### Install your OT sensors
+
+If you're installing Defender for IoT software on your own appliances, download installation software from the Azure portal and install it on your OT sensor appliance.
+
+After installing your OT sensor software, run several checks to validate the installation and configuration.
+
+For more information, see:
+
+- [Install OT monitoring software on OT sensors](install-software-ot-sensor.md)
+- [Validate an OT sensor software installation](post-install-validation-ot-software.md)
+
+Skip these steps if you're purchasing [pre-configured appliances](../ot-pre-configured-appliances.md).
+
+#### Activate your OT sensors and initial setup
+
+Use an initial setup wizard to confirm network settings, activate the sensor, and apply SSL/TLS certificates.
+
+For more information, see [Activate and set up your OT network sensor](activate-deploy-sensor.md).
+
+#### Configure proxy connections
+
+If you've decided to use a proxy to connect your sensors to the cloud, set up your proxy and configure settings on your sensor. For more information, see [Configure proxy settings on an OT sensor](../connect-sensors.md).
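+As a preliminary check, you can verify from a workstation that the proxy itself can reach Azure over HTTPS. This is a minimal sketch only; the proxy address and port are placeholders for your own values, and the definitive endpoint list comes from the Azure portal download described earlier.
+
+```bash
+# proxy.example.local:3128 is a placeholder for your own proxy address and port
+curl --proxy http://proxy.example.local:3128 --head https://portal.azure.com
+```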
+
+Skip this step in the following situations:
+
+- For any OT sensor where you're connecting directly to Azure, without a proxy
+- For any sensor that is planned to be air-gapped and managed locally, either directly on the sensor console, or via an [on-premises management console](air-gapped-deploy.md).
+
+#### Configure optional settings
+
+We recommend that you configure an Active Directory connection for managing on-premises users on your OT sensor, and also set up sensor health monitoring via SNMP; a sample health query is sketched after the links below.
+
+If you don't configure these settings during deployment, you can return and configure them later.
+
+For more information, see:
+
+- [Set up SNMP MIB monitoring on an OT sensor](../how-to-set-up-snmp-mib-monitoring.md)
+- [Configure an Active Directory connection](../manage-users-sensor.md#configure-an-active-directory-connection)
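+Once SNMP health monitoring is configured, you can confirm that the sensor answers polls from your monitoring station. This is a minimal sketch assuming SNMP v2c and placeholder values for the community string and sensor IP address; the OIDs that the sensor exposes are described in the SNMP MIB monitoring article linked above.
+
+```bash
+# Walk the standard system MIB (1.3.6.1.2.1.1) to confirm the sensor responds to SNMP polls.
+# Replace the community string and IP address with your own values.
+snmpwalk -v2c -c <community-string> 192.0.2.10 1.3.6.1.2.1.1
+```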
+
+## Calibrate and fine-tune OT monitoring
+
+The following image shows the steps involved in calibrating and fine-tuning OT monitoring with your newly deployed sensor. Calibration and fine-tuning activities are done by your deployment team.
++
+#### Control OT monitoring on your sensor
+
+By default, your OT sensor may not detect the exact networks that you want to monitor, or identify them in precisely the way you'd like to see them displayed. Use the [lists you created earlier](#prepare-for-an-ot-site-deployment) to verify and manually configure the subnets, customize port and VLAN names, and configure DHCP address ranges as needed.
+
+For more information, see [Control the OT traffic monitored by Microsoft Defender for IoT](../how-to-control-what-traffic-is-monitored.md).
+
+#### Verify and update your detected device inventory
+
+After your devices are fully detected, review the device inventory and modify the device details as needed. For example, you might identify duplicate device entries that can be merged, device types or other properties to modify, and more.
+
+For more information, see [Verify and update your detected device inventory](update-device-inventory.md).
+
+#### Learn OT alerts to create a network baseline
+
+The alerts triggered by your OT sensor may include several alerts that you'll want to regularly ignore, or *Learn*, as authorized traffic.
+
+Review all the alerts in your system as an initial triage. This step creates a network traffic baseline for Defender for IoT to work with moving forward.
+
+For more information, see [Create a learned baseline of OT alerts](create-learned-baseline.md).
+
+## Baseline learning ends
+
+Your OT sensors will remain in *Learning mode* for as long as new traffic is detected and you have unhandled alerts.
++
+When baseline learning ends, the OT monitoring deployment process is complete, and you'll continue in operational mode for ongoing monitoring. In operational mode, any activity that differs from your baseline data will trigger an alert.
+
+> [!TIP]
+> [Turn off learning mode manually](../how-to-manage-individual-sensors.md#turn-off-learning-mode-manually) if you feel that the current alerts in Defender for IoT reflect your network traffic accurately, and learning mode hasn't already ended automatically.
+>
+
+## Next steps
+
+Now that you understand the OT monitoring system deployment steps, you're ready to get started!
+
+> [!div class="step-by-step"]
+> [Plan your OT monitoring system with Defender for IoT ┬╗](../best-practices/plan-corporate-monitoring.md)
defender-for-iot Post Install Validation Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/post-install-validation-ot-software.md
Title: Post-installation validation of OT network monitoring software - Microsoft Defender for IoT
+ Title: Validate an OT sensor software installation - Microsoft Defender for IoT
description: Learn how to test your system post installation of OT network monitoring software for Microsoft Defender for IoT. Use this article after you've reinstalled software on a pre-configured appliance, or if you've chosen to install software on your own appliances. Last updated 12/13/2022
-# Post-installation validation of OT network monitoring software
+# Validate an OT sensor software installation
-After you've installed OT software on your [OT sensors](install-software-ot-sensor.md) or [on-premises management console](install-software-on-premises-management-console.md), test your system to make sure that processes are running correctly. The same validation process applies to all appliance types.
+This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT.
++
+After you've installed OT software on your [OT sensors](install-software-ot-sensor.md), test your system to make sure that processes are running correctly. The same validation process applies to all appliance types.
System health validations are supported via the sensor or on-premises management console UI or CLI, and are available for both the *support* and *cyberx* users.
+If you're using pre-configured appliances, continue directly with [activating and setting up your OT network sensor](activate-deploy-sensor.md) instead.
+
+## Prerequisites
+
+The procedures in this article assume that you've just installed Defender for IoT software on an OT network sensor.
+
+For more information, see [Install OT monitoring software on OT sensors](install-software-ot-sensor.md).
+
+This step is performed by your deployment teams.
+
## General tests

After installing OT monitoring software, make sure to run the following tests:
Destination Gateway Genmask Flags Metric Ref Use Iface
> ```
-Use the `arp -a` command to verify that there is a binding between the MAC address and the IP address of the default gateway. For example:
+Use the `arp -a` command to verify that there's a binding between the MAC address and the IP address of the default gateway. For example:
``` CLI
root@xsense:/# arp -a
redis_22.2.6.27-r-c64cbca.iot_network_22.2.6.27-r-c64cbca (172.18.0.3) at 02:42:
## DNS checks

Use the `cat /etc/resolv.conf` command to find the IP address that's configured for DNS traffic. For example:

``` CLI
root@xsense:/# cat /etc/resolv.conf
search reddog.microsoft.com
https://docsupdatetracker.net/index.html.1 100%[===================>] 97.62K --.-KB/s in 0.02s
> ```
-For more information, see [Check system health](../how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#check-system-health) in our sensor and on-premises management console troubleshooting article.
+For more information, see [Check system health](../how-to-troubleshoot-sensor.md#check-system-health) in our sensor troubleshooting article.
## Next steps
-> [!div class="nextstepaction"]
-> [Troubleshoot an OT sensor or on-premises management console](../how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
->
+For more information, see [Troubleshoot the sensor](../how-to-troubleshoot-sensor.md) and [Troubleshoot the on-premises management console](../how-to-troubleshoot-on-premises-management-console.md).
+
+> [!div class="step-by-step"]
+> [« Install OT monitoring software on OT sensors](install-software-ot-sensor.md)
+
+> [!div class="step-by-step"]
+> [Activate and set up your OT network sensor ┬╗](activate-deploy-sensor.md)
defender-for-iot Prepare Management Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/prepare-management-appliance.md
+
+ Title: Prepare an on-premises management console appliance - Microsoft Defender for IoT
+description: Learn about how to prepare an on-premises management console appliance to manage air-gapped and locally managed OT sensors with Microsoft Defender for IoT.
+ Last updated : 02/22/2023+++
+# Prepare an on-premises management console appliance
+
+This article is one in a series of articles describing the [deployment path](air-gapped-deploy.md) for a Microsoft Defender for IoT on-premises management console for air-gapped OT sensors.
++
+Just as you'd [prepared an on-premises appliance](../best-practices/plan-prepare-deploy.md#prepare-on-premises-appliances) for your OT sensors, prepare an appliance for your on-premises management console.
+
+## Prepare a virtual appliance
+
+If you're using a virtual appliance, ensure that you have the relevant resources configured.
+
+For more information, see [OT monitoring with virtual appliances](../ot-virtual-appliances.md).
+
+## Prepare a physical appliance
+
+If you're using a physical appliance, ensure that you have the required hardware. You can buy [pre-configured appliances](../ot-pre-configured-appliances.md), or plan to [install software](../ot-deploy/install-software-ot-sensor.md) on your own appliances.
+
+To buy pre-configured appliances, email [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsoft.com&subject=Information%20about%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances&body=Dear%20Arrow%20Representative,%0D%0DOur%20organization%20is%20interested%20in%20receiving%20quotes%20for%20Microsoft%20Defender%20for%20IoT%20appliances%20as%20well%20as%20fulfillment%20options.%0D%0DThe%20purpose%20of%20this%20communication%20is%20to%20inform%20you%20of%20a%20project%20which%20involves%20[NUMBER]%20sites%20and%20[NUMBER]%20sensors%20for%20[ORGANIZATION%20NAME].%20Having%20reviewed%20potential%20appliances%20suitable%20for%20our%20project,%20we%20would%20like%20to%20obtain%20more%20information%20about:%20___________%0D%0D%0DI%20would%20appreciate%20being%20contacted%20by%20an%20Arrow%20representative%20to%20receive%20a%20quote%20for%20the%20items%20mentioned%20above.%0DI%20understand%20the%20quote%20and%20appliance%20delivery%20shall%20be%20in%20accordance%20with%20the%20relevant%20Arrow%20terms%20and%20conditions%20for%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances.%0D%0D%0DBest%20Regards,%0D%0D%0D%0D%0D%0D//////////////////////////////%20%0D/////////%20Replace%20[NUMBER]%20with%20appropriate%20values%20related%20to%20your%20request.%0D/////////%20Replace%20[ORGANIZATION%20NAME]%20with%20the%20name%20of%20the%20organization%20you%20represent.%0D//////////////////////////////%0D%0D) to request your appliance.
+
+For more information, see [Which appliances do I need?](../ot-appliance-sizing.md)
+
+### Prepare ancillary hardware
+
+If you're using physical appliances, make sure that you have the following extra hardware available for each physical appliance:
+
+- A monitor and keyboard
+- Rack space
+- AC power
+- A LAN cable to connect the appliance's management port to the network switch
+- LAN cables for connecting mirror (SPAN) ports and network terminal access points (TAPs) to your appliance
+
+## Prepare CA-signed certificates
+
+While the on-premises management console is installed with a default, self-signed SSL/TLS certificate, we recommend using CA-signed certificates in production deployments.
+
+[SSL/TLS certificate requirements](../best-practices/certificate-requirements.md) are the same for on-premises management consoles as they are for OT network sensors.
+
+If you want to deploy a CA-signed certificate during initial deployment, make sure to have the certificate prepared. If you decide to deploy with the built-in, self-signed certificate, we recommend that you still deploy a CA-signed certificate in production environments later on.
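+For example, one common way to prepare a CA-signed certificate is to generate a private key and certificate signing request (CSR) to submit to your CA. This is a minimal sketch; the file names and FQDN are placeholders, and the linked certificate requirements article is the authoritative source for key length, SAN, and other requirements.
+
+```bash
+# Generate a private key and CSR for the on-premises management console.
+# File names and the FQDN are placeholders; substitute your own values.
+openssl req -new -newkey rsa:2048 -nodes \
+  -keyout management-console.key \
+  -out management-console.csr \
+  -subj "/CN=management-console.contoso.local"
+```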
+
+For more information, see:
+
+- [Create SSL/TLS certificates for OT appliances](../ot-deploy/create-ssl-certificates.md)
+- [Manage SSL/TLS certificates](../how-to-manage-the-on-premises-management-console.md#manage-ssltls-certificates)
+
+## Next steps
+
+> [!div class="step-by-step"]
+> [« Defender for IoT OT deployment path](ot-deploy-path.md)
+
+> [!div class="step-by-step"]
+> [Install Microsoft Defender for IoT on-premises management console software ┬╗](install-software-on-premises-management-console.md)
defender-for-iot Provision Cloud Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/provision-cloud-management.md
+
+ Title: Provision OT sensors for cloud management
+description: Learn how to ensure that your OT sensor can connect to Azure by accessing a list of required endpoints to define in your firewall rules.
+ Last updated : 03/20/2023++
+# Provision sensors for cloud management
+
+This article is one in a series of articles describing the [deployment path](ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT, and describes how to ensure that your firewall rules allow connectivity to Azure from your OT sensors.
++
+If you're working with an air-gapped environment and locally managed sensors, you can skip this step.
+
+## Prerequisites
+
+To perform the steps described in this article, you need access to the Azure portal as a [Security Reader](../../../role-based-access-control/built-in-roles.md#security-reader), [Security Admin](../../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../../role-based-access-control/built-in-roles.md#owner) user.
+
+This step is performed by your connectivity teams.
+
+## Allow connectivity to Azure
+
+This section describes how to download a list of required endpoints to define in firewall rules, ensuring that your OT sensors can connect to Azure.
+
+This procedure is also used to configure [direct connections](../architecture-connections.md#direct-connections) to Azure. If you're planning to use a proxy configuration instead, you'll [configure proxy settings](../connect-sensors.md) after installing and activating your sensor.
+
+For more information, see [Methods for connecting sensors to Azure](../architecture-connections.md).
+
+**To download required endpoint details**:
+
+1. On the Azure portal, go to Defender for IoT > **Sites and sensors**.
+
+1. Select **More actions** > **Download endpoint details**.
+
+Configure your firewall rules so that your sensor can access each of the endpoints in the downloaded list over port 443.
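+To spot-check the rules before deploying, you can test reachability from a host in the same network segment as the sensor's management interface. This is a minimal sketch; `endpoints.txt` is a placeholder file containing one hostname per line, copied from the downloaded endpoint details.
+
+```bash
+# Test HTTPS reachability for each hostname listed in endpoints.txt (one per line)
+while read -r host; do
+  printf '%s: ' "$host"
+  if curl --silent --head --max-time 10 "https://$host" > /dev/null; then
+    echo "reachable on 443"
+  else
+    echo "blocked or unreachable"
+  fi
+done < endpoints.txt
+```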
+
+> [!IMPORTANT]
+> Azure public IP addresses are updated weekly. If you must define firewall rules based on IP addresses, make sure to download the new [JSON file](https://www.microsoft.com/download/details.aspx?id=56519) each week and make the required changes on your site to correctly identify services running in Azure.
+>
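+If you do base rules on IP addresses from that weekly JSON file, a JSON processor such as `jq` can help extract the relevant address prefixes. This is a minimal sketch that assumes the standard Azure service tags schema; the file name and the **AzureIoTHub** tag are illustrative placeholders, and the endpoints you actually need are listed in the downloaded endpoint details.
+
+```bash
+# Extract the address prefixes for a given service tag from the weekly service tags file
+jq -r '.values[] | select(.name == "AzureIoTHub") | .properties.addressPrefixes[]' ServiceTags_Public.json
+```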
+
+## Next steps
+
+> [!div class="step-by-step"]
+> [« Configure traffic mirroring](../traffic-mirroring/traffic-mirroring-overview.md)
+
+> [!div class="step-by-step"]
+> [Install OT monitoring software on OT sensors ┬╗](install-software-ot-sensor.md)
defender-for-iot Sites And Zones On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/sites-and-zones-on-premises.md
Title: Create OT sites and zones on an on-premises management console - Microsoft Defender for IoT description: Learn how to create OT networking sites and zones on an on-premises management console to support Zero Trust principles while monitoring OT networks. Previously updated : 02/15/2023- Last updated : 01/08/2023+ - zerotrust-services # Create OT sites and zones on an on-premises management console
+This article is one in a series of articles describing the [deployment path](air-gapped-deploy.md) for a Microsoft Defender for IoT on-premises management console for air-gapped OT sensors.
++ This article describes how to create sites and zones on an on-premises management console, based on the network segments you've identified across your OT environments. Segmenting your network by sites and zones is an integral part of implementing a [Zero Trust security strategy](../concept-zero-trust.md), and assigning sensors to specific sites and zones helps you monitor for unauthorized traffic crossing segments.
-Data ingested from sensors in the same site or zone can be viewed together, segmented out from other data in your system.
+Data ingested from sensors in the same site or zone can be viewed together, segmented out from other data in your system.
If there's sensor data that you want to view grouped together in the same site or zone, make sure to assign sensor sites and zones accordingly.
-An on-premises management console adds the extra layers of *business units* and *regions* to your network segmentation, and also provides an interactive, global map to view all business units, regions, sites, and zones across your network.
+An on-premises management console adds the extra layers of *business units* and *regions* to your network segmentation, and also provides an interactive, global map to view all business units, regions, sites, and zones across your network.
> [!NOTE]
> Sites and zones created on an on-premises management console aren't synchronized with sites and zones created in the Azure portal when [onboarding OT sensors](../onboard-sensors.md#onboard-an-ot-sensor).
An on-premises management console adds the extra layers of *business units* and
- An on-premises management console [installed](install-software-on-premises-management-console.md) and [activated](../how-to-activate-and-set-up-your-on-premises-management-console.md)
+- OT sensors [connected to your on-premises management console](connect-sensors-to-management.md)
+ ## Customize your global map (optional) By default, the on-premises management console shows a blank world map for you to build and monitor your business units, regions, sites, and zones.
For example, if you have multiple offices in the same city, you'd create a separ
:::image type="content" source="../media/sites-and-zones/new-site.png" alt-text="Screenshot of the Create New Site dialog." lightbox="../media/sites-and-zones/new-site.png":::
-1. Repeat this step for each of the sites you want to create, populating the map to cover your entire network. For example:
+ Select **Save** to save your changes.
+
+1. Repeat the previous two steps for each of the sites you want to create, populating the map to cover your entire network. For example:
:::image type="content" source="../media/sites-and-zones/enterprise-map-sample.png" alt-text="Screenshot of a populated Enterprise View map." lightbox="../media/sites-and-zones/enterprise-map-sample.png":::
If you've also configured sites and zones on your on-premises management, assign
**To assign an OT sensor to a site and zone**:
-1. Sign into your on-premises management console and select **Site Management** on the left.
+1. Sign into your on-premises management console and select **Site Management**.
1. In the **Connectivity** column, verify that the sensor is currently connected to the on-premises management console.
The page takes a few moments to refresh with the updated sensor assignments.
**To delete a sensor's zone assignment**:
-1. Sign into your on-premises management console and select **Site Management** on the left.
+1. Sign into your on-premises management console and select **Site Management**.
-1. Locate the sensor who's sensor assignment you want to remove. At the far right of the sensor row, select the **Unassign** :::image type="icon" source="../media/how-to-activate-and-set-up-your-on-premises-management-console/unassign-sensor-icon.png" border="false"::: button.
+1. Locate the sensor whose assignment you want to remove. At the far right of the sensor row, select the **Unassign** :::image type="icon" source="../media/how-to-activate-and-set-up-your-on-premises-management-console/unassign-sensor-icon.png" border="false"::: button.
1. In the confirmation message dialog, select **CONFIRM**.
After you've created sites and zones, you can view, edit, or delete them from bo
- On the **Enterprise View** map, select a site to view all of its zones - On the **Site Management** page, expand or collapse each site to its zones
-For each site or zone, select the options menu on the right to make changes, or to delete a site or zone. For example:
+For each site or zone, select the options menu to make changes, or to delete a site or zone. For example:
:::image type="content" source="../media/sites-and-zones/edit-delete-site-zone.png" alt-text="Screenshot of the options menu for editing or deleting a site or zone." lightbox="../media/sites-and-zones/edit-delete-site-zone.png"::: ## Next steps
-> [!div class="next-steps"]
-> [Tutorial: Monitor your network with Zero Trust principles](../monitor-zero-trust.md)
+> [!div class="step-by-step"]
+> [« Connect OT network sensors to the on-premises management console](connect-sensors-to-management.md)
+
+You've now finished deploying your on-premises management console. For more information, see:
+
+- [Tutorial: Monitor your OT networks with Zero Trust principles](../monitor-zero-trust.md)
+- [Maintain the on-premises management console](../how-to-manage-the-on-premises-management-console.md)
+- [Manage sensors from the on-premises management console](../how-to-manage-sensors-from-the-on-premises-management-console.md)
defender-for-iot Update Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/update-device-inventory.md
+
+ Title: Verify and update detected device inventory - Microsoft Defender for IoT
+description: Learn how to fine-tune your newly detected device inventory on an OT sensor, such as updating device types and properties, merging devices as needed, and more.
Last updated : 03/09/2023+++
+# Verify and update your detected device inventory
+
+This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT, and describes how to review your device inventory and enhance security monitoring with fine-tuned device details.
++
+## Prerequisites
+
+Before performing the procedures in this article, make sure that you have:
+
+- An OT sensor [installed](install-software-ot-sensor.md), [activated, and configured](activate-deploy-sensor.md), with device data detected.
+
+- Access to your OT sensor as **Security Analyst** or **Admin** user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](../roles-on-premises.md).
+
+This step is performed by your deployment teams.
+
+## View your device inventory on the OT sensor
+
+1. Sign into your OT sensor and select the **Device inventory** page.
+
+1. Select **Edit Columns** to view additional information in the grid so that you can review the data detected for each device.
+
+ We especially recommend reviewing data for the **Name**, **Class**, **Type**, **Subtype**, **Authorization**, **Scanner device**, and **Programming device** columns.
+
+1. Understand the devices that the OT sensor's detected, and identify any devices where you'll need to edit device properties.
+
+## Edit device properties per device
+
+For each device where you need to edit device properties:
+
+1. Select the device in the grid and then select **Edit** to view the editing pane. For example:
+
+ :::image type="content" source="../media/update-device-inventory/edit-device-details.png" alt-text="Screenshot of editing device details from the OT sensor.":::
+
+1. Edit any of the following device properties as needed:
+
+ |Name |Description |
+ |||
+ |**Authorized Device** | Select if the device is a known entity on your network. Defender for IoT doesn't trigger alerts for learned traffic on authorized devices. |
+ |**Name** | By default, shown as the device's IP address. Update this to a meaningful name for your device as needed. |
+ |**Description** | Left blank by default. Enter a meaningful description for your device as needed. |
+ |**OS Platform** | If the operating system value is blocked for detection, select the device's operating system from the dropdown list. |
+ |**Type** | If the device's type is blocked for detection or needs to be modified, select a device type from the dropdown list. For more information, see [Supported devices](../device-inventory.md#supported-devices). |
+ |**Purdue Level** | If the device's Purdue level is detected as **Undefined** or **Automatic**, we recommend selecting another level to fine-tune your data. For more information, see [Placing OT sensors in your network](../best-practices/understand-network-architecture.md#placing-ot-sensors-in-your-network). |
+ |**Scanner** | Select if your device is a scanning device. Defender for IoT doesn't trigger alerts for scanning activities detected on devices defined as scanning devices. |
+ |**Programming device** | Select if your device is a programming device. Defender for IoT doesn't trigger alerts for programming activities detected on devices defined as programming devices. |
+
+1. Select **Save** to save your changes.
+
+## Merge duplicate devices
+
+As you review the devices detected on your device inventory, note whether multiple entries have been detected for the same device on your network.
+
+For example, this might occur when you have a PLC with four network cards, a laptop with both WiFi and a physical network card, or a single workstation with multiple network cards.
+
+Devices with the same IP and MAC addresses are automatically merged, and identified as the same device. We recommend merging any duplicate devices so that each entry in the device inventory represents only one unique device in your network.
+
+> [!IMPORTANT]
+> Device merges are irreversible. If you merge devices incorrectly, you'll have to delete the merged device and wait for the sensor to rediscover both devices.
+>
+
+To merge multiple devices, select two or more authorized devices in the device inventory and then select **Merge**.
+
+The devices and all their properties are merged in the device inventory. For example, if you merge two devices with different IP addresses, each IP address appears as a separate interface in the new device.
+
+## Enhance device data (optional)
+
+You may want to increase device visibility and enhance device data with more details than the default data detected.
+
+- To increase device visibility to Windows-based devices, use the Defender for IoT [Windows Management Instrumentation (WMI) tool](../detect-windows-endpoints-script.md).
+
+- If your organization's network policies prevent some data from being ingested, [import the extra data in bulk](../how-to-import-device-information.md).
+
+## Next steps
+
+> [!div class="step-by-step"]
+> [« Control what traffic is monitored](../how-to-control-what-traffic-is-monitored.md)
+
+> [!div class="step-by-step"]
+> [Create a learned baseline of OT alerts ┬╗](create-learned-baseline.md)
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
# Pre-configured physical appliances for OT monitoring
-This article provides a catalog of the pre-configured appliances available for Microsoft Defender for IoT OT sensors and on-premises management consoles.
+This article is one in a series of articles describing the [deployment path](ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT, and lists the catalog of pre-configured appliances available for Microsoft Defender for IoT OT monitoring. Use the links in the tables below to jump to articles with more details about each appliance.
-Use the links in the tables below to jump to articles with more details about each appliance.
-
-Microsoft has partnered with [Arrow Electronics](https://www.arrow.com/) to provide pre-configured sensors. To purchase a pre-configured sensor, contact Arrow at: [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsoft.com&subject=Information%20about%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances&body=Dear%20Arrow%20Representative,%0D%0DOur%20organization%20is%20interested%20in%20receiving%20quotes%20for%20Microsoft%20Defender%20for%20IoT%20appliances%20as%20well%20as%20fulfillment%20options.%0D%0DThe%20purpose%20of%20this%20communication%20is%20to%20inform%20you%20of%20a%20project%20which%20involves%20[NUMBER]%20sites%20and%20[NUMBER]%20sensors%20for%20[ORGANIZATION%20NAME].%20Having%20reviewed%20potential%20appliances%20suitable%20for%20our%20project,%20we%20would%20like%20to%20obtain%20more%20information%20about:%20___________%0D%0D%0DI%20would%20appreciate%20being%20contacted%20by%20an%20Arrow%20representative%20to%20receive%20a%20quote%20for%20the%20items%20mentioned%20above.%0DI%20understand%20the%20quote%20and%20appliance%20delivery%20shall%20be%20in%20accordance%20with%20the%20relevant%20Arrow%20terms%20and%20conditions%20for%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances.%0D%0D%0DBest%20Regards,%0D%0D%0D%0D%0D%0D//////////////////////////////%20%0D/////////%20Replace%20[NUMBER]%20with%20appropriate%20values%20related%20to%20your%20request.%0D/////////%20Replace%20[ORGANIZATION%20NAME]%20with%20the%20name%20of%20the%20organization%20you%20represent.%0D//////////////////////////////%0D%0D).
-
-For more information, see [Purchase sensors or download software for sensors](onboard-sensors.md#purchase-sensors-or-download-software-for-sensors).
+Microsoft has partnered with [Arrow Electronics](https://www.arrow.com/) to provide pre-configured appliances. To purchase a pre-configured appliance, contact Arrow at: [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsoft.com&subject=Information%20about%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances&body=Dear%20Arrow%20Representative,%0D%0DOur%20organization%20is%20interested%20in%20receiving%20quotes%20for%20Microsoft%20Defender%20for%20IoT%20appliances%20as%20well%20as%20fulfillment%20options.%0D%0DThe%20purpose%20of%20this%20communication%20is%20to%20inform%20you%20of%20a%20project%20which%20involves%20[NUMBER]%20sites%20and%20[NUMBER]%20sensors%20for%20[ORGANIZATION%20NAME].%20Having%20reviewed%20potential%20appliances%20suitable%20for%20our%20project,%20we%20would%20like%20to%20obtain%20more%20information%20about:%20___________%0D%0D%0DI%20would%20appreciate%20being%20contacted%20by%20an%20Arrow%20representative%20to%20receive%20a%20quote%20for%20the%20items%20mentioned%20above.%0DI%20understand%20the%20quote%20and%20appliance%20delivery%20shall%20be%20in%20accordance%20with%20the%20relevant%20Arrow%20terms%20and%20conditions%20for%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances.%0D%0D%0DBest%20Regards,%0D%0D%0D%0D%0D%0D//////////////////////////////%20%0D/////////%20Replace%20[NUMBER]%20with%20appropriate%20values%20related%20to%20your%20request.%0D/////////%20Replace%20[ORGANIZATION%20NAME]%20with%20the%20name%20of%20the%20organization%20you%20represent.%0D//////////////////////////////%0D%0D).
+> [!NOTE]
+> This article also includes information relevant for on-premises management consoles. For more information, see the [Air-gapped OT sensor management deployment path](ot-deploy/air-gapped-deploy.md).
+>
## Advantages of pre-configured appliances

Pre-configured physical appliances have been validated for Defender for IoT OT system monitoring, and have the following advantages over installing your own software:
You can [order](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsof
|**L500** | [HPE ProLiant DL20 Gen10 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: Up to 200 Mbps<br>**Max devices**: 1,000 <br> 4 Cores/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 | |**L100** | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: Up to 10 Mbps <br>**Max devices**: 100 <br> 4 Cores/8G RAM/128GB | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 | - > [!NOTE] > The performance, capacity, and activity of an OT/IoT network may vary depending on its size, capacity, protocols distribution, and overall activity. For deployments, it is important to factor in raw network speed, the size of the network to monitor, and application configuration. The selection of processors, memory, and network cards is heavily influenced by these deployment configurations. The amount of space needed on your disk will differ depending on how long you store data, and the amount and type of data you store. <br><br> > *Performance values are presented as upper thresholds under the assumption of intermittent traffic profiles, such as those found in OT/IoT systems and machine-to-machine communication networks.*
For information about previously supported legacy appliances, see the [appliance
## Next steps
-Continue understanding system requirements for physical or virtual appliances.
-
-For more information, see [Which appliances do I need?](ot-appliance-sizing.md) and [OT monitoring with virtual appliances](ot-virtual-appliances.md).
-
-Then, use any of the following procedures to continue:
--- [Purchase sensors or download software for sensors](onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)-- [Download software for an on-premises management console](how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install software](how-to-install-software.md)-
-Our OT monitoring appliance reference articles also include extra installation procedures in case you need to install software on your own appliances, or reinstall software on preconfigured appliances. For more information, see [OT monitoring appliance reference](appliance-catalog/index.yml).
+> [!div class="step-by-step"]
+> [« Prepare an OT site deployment](best-practices/plan-prepare-deploy.md)
defender-for-iot Ot Virtual Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-virtual-appliances.md
# OT monitoring with virtual appliances
-This article lists the specifications required if you want to install Microsoft Defender for IoT OT sensor and on-premises management console software on your own virtual appliances.
+This article is one in a series of articles describing the [deployment path](ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT, and lists the specifications required if you want to install Microsoft Defender for IoT software on your own virtual appliances.
++
+> [!NOTE]
+> This article also includes information relevant for on-premises management consoles. For more information, see the [Air-gapped OT sensor management deployment path](ot-deploy/air-gapped-deploy.md).
+>
## About hypervisors
An on-premises management console on a virtual appliance is supported for enterp
## Next steps
-Continue understanding system requirements for physical or virtual appliances. For more information, see:
--- [Which appliances do I need?](ot-appliance-sizing.md)-- [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md)-
-Then, use any of the following procedures to continue:
--- [Purchase sensors or download software for sensors](onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)-- [Download software for an on-premises management console](how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console)-- [Install software](how-to-install-software.md)-
-Reference articles for OT monitoring appliances also include installation procedures in case you need to install software on your own appliances, or re-install software on preconfigured appliances.
+> [!div class="step-by-step"]
+> [« Prepare an OT site deployment](best-practices/plan-prepare-deploy.md)
defender-for-iot References Data Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-data-retention.md
Last updated 01/22/2023
# Data retention across Microsoft Defender for IoT
-Microsoft Defender for IoT sensors learn a baseline of your network traffic during the initial learning period after deployment. This learned baseline is stored indefinately on your sensors.
+Microsoft Defender for IoT sensors learn a baseline of your network traffic during the initial learning period after deployment. This learned baseline is stored indefinitely on your sensors.
Defender for IoT also stores other data in the Azure portal, on OT network sensors, and on-premises management consoles.
Other OT monitoring log files are stored only on the OT network sensor and the o
For more information, see: -- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)-- [Download a diagnostics log for support](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support)
+- [Troubleshoot the sensor](how-to-troubleshoot-sensor.md)
+- [Troubleshoot the on-premises management console](how-to-troubleshoot-on-premises-management-console.md)
## On-premises backup file capacity
On both the OT sensor and the on-premises management console, older backup files
For more information, see: -- [Set up backup and restore files](how-to-manage-individual-sensors.md#set-up-backup-and-restore-files)-- [Configure backup settings for an OT network sensor](how-to-manage-individual-sensors.md#set-up-backup-and-restore-files)-- [Configure OT sensor backup settings from an on-premises management console](how-to-manage-sensors-from-the-on-premises-management-console.md#backup-storage-for-sensors)-- [Configure backup settings for an on-premises management console](how-to-manage-the-on-premises-management-console.md#define-backup-and-restore-settings)
+- [Set up backup and restore files on an OT sensor](back-up-restore-sensor.md#set-up-backup-and-restore-files)
+- [Configure OT sensor backup settings on an on-premises management console](back-up-sensors-from-management.md#configure-ot-sensor-backup-settings)
+- [Configure OT sensor backup settings for an on-premises management console](back-up-sensors-from-management.md#configure-ot-sensor-backup-settings)
### Backups on the OT network sensor
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Earlier versions use a legacy support model, with support dates [detailed for ea
### On-premises appliance security
-The OT network sensor and the on-premises management console are designed as a *locked-down* security appliance with a hardened attack surface. Appliance access and control is allowed only through the [management port](best-practices/understand-network-architecture.md), via HTTP for web access and SSH for the support shell.
+The OT network sensor and the on-premises management console are designed as *locked-down* security appliances with a hardened attack surface. Appliance access and control are allowed only through the [management port](best-practices/understand-network-architecture.md), via HTTP for web access and SSH for the support shell.
Defender for IoT adheres to the [Microsoft Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) throughout the entire development lifecycle, including activities like training, compliance, code reviews, threat modeling, design requirements, component governance, and pen testing. All appliances are locked down according to industry best practices and should not be modified.
Version 22.3.7 includes the same features as 22.3.6. If you have version 22.3.6
- Support for [deleting multiple devices](how-to-investigate-sensor-detections-in-a-device-inventory.md#delete-devices) on OT sensors - An enhanced [editing device details](how-to-investigate-sensor-detections-in-a-device-inventory.md#edit-device-details) process on the OT sensor, using an **Edit** button in the toolbar at the top of the page - [Enhanced UI on the OT sensor for uploading an SSL/TLS certificate](how-to-deploy-certificates.md#deploy-ssltls-certificates-on-ot-appliances)-- [Activation files for locally-managed sensors no longer expire](how-to-manage-individual-sensors.md#upload-a-new-activation-file)
+- [Activation files for locally managed sensors no longer expire](how-to-manage-individual-sensors.md#upload-a-new-activation-file)
- Severity for all [**Suspicion of Malicious Activity**](alert-engine-messages.md#malware-engine-alerts) alerts is now **Critical** - [Allow internet connections on an OT network in bulk](how-to-accelerate-alert-incident-response.md#allow-internet-connections-on-an-ot-network)
This version includes the following new updates and fixes:
This version includes the following new updates and fixes: -- [Diagnostic logs automatically available to support for cloud-connected sensors](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support)
+- [Diagnostic logs automatically available to support for cloud-connected sensors](how-to-troubleshoot-sensor.md#download-a-diagnostics-log-for-support)
- [Rockwell protocol: Device inventory shows PLC operating mode key state, run state, and security mode](how-to-manage-device-inventory-for-organizations.md) - [Automatic CLI session timeouts](references-work-with-defender-for-iot-cli-commands.md) - [Sensor health widgets in the Azure portal](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health)
This version includes the following new updates and fixes:
- [Enhanced sensor Overview page](how-to-manage-individual-sensors.md) -- [New sensor diagnostics log](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support)
+- [New sensor diagnostics log](how-to-troubleshoot-sensor.md#download-a-diagnostics-log-for-support)
- [Alert updates](how-to-view-alerts.md):
This version includes the following new updates and fixes:
- [New connectivity models](architecture-connections.md) -- [New firewall requirements](how-to-set-up-your-network.md#sensor-access-to-azure-portal)
+- [New firewall requirements](networking-requirements.md#sensor-access-to-azure-portal)
- [Improved support for Profinet DCP, Honeywell, and Windows endpoint detection protocols](concept-supported-protocols.md) - [Sensor reports now accessible from the **Data Mining** page](how-to-create-data-mining-queries.md) -- [Updated process for sensor name changes](how-to-manage-individual-sensors.md#change-the-name-of-a-sensor)
+- [Updated process for sensor name changes](how-to-manage-individual-sensors.md#upload-a-new-activation-file)
- [Site-based access control on the Azure portal](manage-users-portal.md#manage-site-based-access-control-public-preview)
This version includes the following new updates and fixes:
- [PLC operating mode detections](how-to-create-risk-assessment-reports.md) - [New PCAP API](api/management-alert-apis.md#pcap-request-alert-pcap)-- [Export logs from the on-premises management console for troubleshooting](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#export-logs-from-the-on-premises-management-console-for-troubleshooting)
+- [Export logs from the on-premises management console for troubleshooting](how-to-troubleshoot-on-premises-management-console.md#export-logs-from-the-on-premises-management-console-for-troubleshooting)
- [Support for Webhook extended to send data to endpoints](how-to-forward-alert-information-to-partners.md#webhook-extended) - [Unicode support for certificate passphrases](how-to-deploy-certificates.md)
defender-for-iot Resources Manage Proprietary Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/resources-manage-proprietary-protocols.md
# Manage proprietary protocols with Horizon plugins
-You can use the Microsoft Defender for IoT Horizon SDK to develop your plugins to support proprietary protocols used for your IoT and ICS devices.
+If your devices use proprietary protocols not supported [out-of-the-box](concept-supported-protocols.md) by Microsoft Defender for IoT, use the Defender for IoT Open Development Environment (ODE) and SDK to develop your own plugin support. Defender for IoT's SDK provides:
-## About Horizon
-
-Horizon provides:
--- Unlimited, full support for common, proprietary, custom protocols or protocols that deviate from any standard.-- A new level of flexibility and scope for DPI development.
+- Unlimited, full support for common, proprietary, custom protocols, or protocols that deviate from any standard.
+- An extra level of flexibility and scope for DPI development.
- A tool that exponentially expands OT visibility and control, without the need to upgrade to new versions. - The security of allowing proprietary development without divulging sensitive information.
-Use the Horizon SDK to design dissector plugins that decode network traffic so it can be processed by automated Defender for IoT network analysis programs.
+For more information, contact [ms-horizon-support@microsoft.com](mailto:ms-horizon-support@microsoft.com).
-Protocol dissectors are developed as external plugins and are integrated with an extensive range of Defender for IoT services, for example services that provide monitoring, alerting, and reporting capabilities.
+## Prerequisites
-Contact <ms-horizon-support@microsoft.com> for details about working with the Open Development Environment (ODE) SDK and creating protocol plugins.
+Before performing the steps described in this article, make sure that you have:
-## Add a plugin to your sensor
+- An OT network sensor [installed](ot-deploy/install-software-ot-sensor.md), [activated, and configured](ot-deploy/activate-deploy-sensor.md), with device data ingested.
-**Prerequisites**:
+- Access to your OT network sensor as an **Admin** user and as a privileged user for CLI access. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
-- Access to the plugin developed for your proprietary protocol and the signing certificate you created for it-- Credentials for the Administrator, Cyberx, or Support users
+- Access to the plugin developed for your proprietary protocol and the signing certificate created for it.
+
+## Add a plugin to your OT sensor
After you've developed and tested a dissector plugin for proprietary protocols, add it to any sensors where it's needed.

**To upload your plugin to a sensor**:
-1. Sign in to your sensor machine via CLI as the *Administrator*, *Cyberx*, or *Support* user.
-
-1. Go the `/var/cyberx/properties/horizon.properties` file and verify that the `ui.enabled` property is set to `true` (`horizon.properties:ui.enabled=true`)
+1. Sign into your OT sensor machine via SSH / Telnet to access the CLI.
-1. Sign in to the sensor console as the *Administrator*, *Cyberx*, or *Support*.
+1. Go to the `/var/cyberx/properties/horizon.properties` file and verify that the `ui.enabled` property is set to `true` (`horizon.properties:ui.enabled=true`). A quick CLI check is sketched below.
-1. Select **System settings > Network monitoring > Protocols DPI (Horizon Plugins)**.
+1. Sign into your OT sensor console and select **System settings > Network monitoring > Protocols DPI (Horizon Plugins)**.
- The **Protocols DPI (Horizon Plugins)** page lists all of the infrastructure plugins provided out-of-the-box by Defender for IoT and any other plugin you've created and uploaded to the sensor.
+ The **Protocols DPI (Horizon Plugins)** page lists all of the infrastructure plugins provided out-of-the-box by Defender for IoT and any other plugin you've created and uploaded to the sensor. For example:
:::image type="content" source="media/release-notes/horizon.png" alt-text="Screenshot of the new Protocols D P I (Horizon Plugins) page." lightbox="media/release-notes/horizon.png":::
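+As sketched here, you can confirm the property value from the CLI before opening the sensor console; the path and property name are taken from the earlier step.
+
+```bash
+# Print the ui.enabled line to confirm the Horizon UI property is set to true
+grep 'ui.enabled' /var/cyberx/properties/horizon.properties
+```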
After you've developed and tested a dissector plugin for proprietary protocols,
### Toggle a plugin on or off
-After you've uploaded a plugin, you can toggle it on or off as needed. Sensors do not handle protocol traffic defined for a plugin that's currently toggled off (disabled).
+After you've uploaded a plugin, you can toggle it on or off as needed. Sensors don't handle protocol traffic defined for a plugin that's currently toggled off (disabled).
> [!NOTE] > Infrastructure plugins cannot be toggled off.
The **Protocols DPI (Horizon Plugins)** lists the following data per plugin:
|Column name |Description | |||
-|**Plugin** | Defines the plugin name |
+|**Plugin** | Defines the plugin name. |
|**Type** | The plugin type, including APPLICATION or INFRASTRUCTURE. | |**Time** | The time that data was last analyzed using the plugin. The time stamp is updated every five seconds. | |**PPS** | The number of packets analyzed per second by the plugin. | |**Bandwidth** | The average bandwidth detected by the plugin within the last five seconds. |
-|**Malforms** | The number of malform errors detected in the last five seconds. Malformed validations are used after the protocol has been positively validated. If there is a failure to process the packets based on the protocol, a failure response is returned. |
+|**Malforms** | The number of malform errors detected in the last five seconds. Malformed validations are used after the protocol has been positively validated. If there's a failure to process the packets based on the protocol, a failure response is returned. |
|**Warnings** | The number of warnings detected, such as when packets match the structure and specifications, but unexpected behavior is detected, based on the plugin warning configuration. | | **Errors** | The number of errors detected in the last five seconds for packets that failed basic protocol validations for the packets that match protocol definitions. |
-Horizon log data is available for export in the **Dissection statistics** and **Dissection Logs**, log files. For more information, see [Export troubleshooting logs](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md).
+Horizon log data is available for export in the **Dissection statistics** and **Dissection Logs** log files. For more information, see [Export troubleshooting logs](how-to-troubleshoot-sensor.md).
## Create custom alert rules for Horizon-based traffic
-After adding a proprietary plugin to your sensor, you might want to configure custom alert rules for your proprietary protocol. Custom, conditioned-based alert triggers and messages helps to pinpoint specific network activity and effectively update your security, IT, and operational teams.
+After adding a proprietary plugin to your sensor, you might want to configure custom alert rules for your proprietary protocol. Custom, condition-based alert triggers and messages help to pinpoint specific network activity and effectively update your security, IT, and operational teams.
Use custom alerts to detect traffic based on protocols and underlying protocols in a proprietary Horizon plugin, or a combination of protocol fields from all protocol layers. Custom alerts also let you write your own alert titles and messages, and handle protocol fields and values in the alert message text.
-For example, in an environment running MODBUS, you may want to generate an alert when the sensor detects a write command to a memory register on a specific IP address and ethernet destination, or an alert when any access is performed to a specific IP address.
+For example, in an environment running MODBUS, you may want to generate an alert when the sensor detects a write command to a memory register on a specific IP address and ethernet destination, or when any access is performed to a specific IP address.
**When an alert is triggered by a Horizon-based custom alert rule**:
For more information, see [Create custom alert rules on an OT sensor](how-to-acc
## Next steps
-For more information, see [Microsoft Defender for IoT - supported IoT, OT, ICS, and SCADA protocols](concept-supported-protocols.md).
+For more information, see [Protocols supported by Defender for IoT](concept-supported-protocols.md).
defender-for-iot Roles Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/roles-azure.md
For more information, see:
- [Manage OT monitoring users on the Azure portal](manage-users-portal.md) - [On-premises user roles for OT monitoring with Defender for IoT](roles-on-premises.md) - [Create and manage users on an OT network sensor](manage-users-sensor.md)-- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
+- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
defender-for-iot Roles On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/roles-on-premises.md
The following roles are available on OT network sensors and on-premises manageme
||| |**Admin** | Admin users have access to all tools, including system configurations, creating and managing users, and more. | |**Security Analyst** | Security Analysts don't have admin-level permissions for configurations, but can perform actions on devices, acknowledge alerts, and use investigation tools. <br><br>Security Analysts can access options on the sensor displayed in the **Discover** and **Analyze** menus on the sensor, and in the **NAVIGATION** and **ANALYSIS** menus on the on-premises management console. |
-|**Read Only** | Read-only users perform tasks such as viewing alerts and devices on the device map. <br><br>Read Only users can access options displayed in the **Discover** and **Analyze** menus on the sensor, in read-only mode, and in the **NAVIGATION** menu on the on-premises management console. |
+|**Read-Only** | Read-only users perform tasks such as viewing alerts and devices on the device map. <br><br>Read-Only users can access options displayed in the **Discover** and **Analyze** menus on the sensor, in read-only mode, and in the **NAVIGATION** menu on the on-premises management console. |
When first deploying an OT monitoring system, sign in to your sensors and on-premises management console with one of the [default, privileged users](#default-privileged-on-premises-users) described above. Create your first **Admin** user, and then use that user to create other users and assign them to roles.
defender-for-iot Track User Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/track-user-activity.md
Audit logs include the following data:
> [!TIP]
-> You may also want to export your audit logs to send them to the support team for extra troubleshooting. For more information, see [Export logs from the on-premises management console for troubleshooting](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#export-logs-from-the-on-premises-management-console-for-troubleshooting).
+> You may also want to export your audit logs to send them to the support team for extra troubleshooting. For more information, see [Export logs from the on-premises management console for troubleshooting](how-to-troubleshoot-on-premises-management-console.md#export-logs-from-the-on-premises-management-console-for-troubleshooting).
> ## Next steps
defender-for-iot Configure Mirror Erspan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/traffic-mirroring/configure-mirror-erspan.md
Title: Configure traffic mirroring with an encapsulated remote switched port analyzer (ERSPAN) - Microsoft Defender for IoT description: This article describes traffic mirroring with ERSPAN for monitoring with Microsoft Defender for IoT. Last updated 09/20/2022-+ # Configure traffic mirroring with an encapsulated remote switched port analyzer (ERSPAN)
-Use an encapsulated remote switched port analyzer (ERSPAN) to mirror input interfaces over an IP network to your OT sensor's monitoring interface, when securing remote networks with Defender for IoT.
+This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT.
-The sensor's monitoring interface is a promiscuous interface and does not have a specifically allocated IP address. When ERSPAN support is configured, traffic payloads that are ERSPAN encapsulated with GRE tunnel encapsulation will be analyzed by the sensor.
-Use ERSPAN encapsulation when there is a need to extend monitored traffic across Layer 3 domains. ERSPAN is a Cisco proprietary feature and is available only on specific routers and switches. For more information, see the [Cisco documentation](https://learningnetwork.cisco.com/s/article/span-rspan-erspan).
+This article provides high-level guidance for configuring [traffic mirroring with ERSPAN](../best-practices/traffic-mirroring-methods.md#erspan-ports). Specific implementation details will vary depending on your equipment vendor.
-> [!NOTE]
-> This article provides high-level guidance for configuring traffic mirroring with ERSPAN. Specific implementation details will vary depending on your equiptment vendor.
->
+We recommend using your receiving router as the generic routing encapsulation (GRE) tunnel destination.
-## ERSPAN architecture
+## Prerequisites
-ERSPAN sessions include a source session and a destination session configured on different switches. Between the source and destination switches, traffic is encapsulated in GRE, and can be routed over layer 3 networks.
+Before you start, make sure that you understand your plan for network monitoring with Defender for IoT, and the SPAN ports you want to configure.
-For example:
--
-ERSPAN transports mirrored traffic over an IP network using the following process:
-
-1. A source router encapsulates the traffic and sends the packet over the network.
-1. At the destination router, the packet is de-capsulated and sent to the destination interface.
-
-ERSPAN source options include elements such as:
--- Ethernet ports and port channels-- VLANs; all supported interfaces in the VLAN are ERSPAN sources-- Fabric port channels-- Satellite ports and host interface port channels-
-> [!TIP]
-> When configuring ERSPAN, we recommend using your receiving router as the generic routing encapsulation (GRE) tunnel destination.
->
+For more information, see [Traffic mirroring methods for OT monitoring](../best-practices/traffic-mirroring-methods.md).
## Configure ERSPAN on your OT network sensor
-Newly installed OT network sensors have ERSPAN and GRE header stripping turned off by default. To turn on support for ERSPAN, you'll need to configure your ERSPAN interfaces and then enable the RCDCAP component to restart your monitoring processes.
+Newly installed OT network sensors have ERSPAN and GRE header stripping turned off by default. To turn on support for ERSPAN, you'll need to configure your ERSPAN interfaces, and then enable the RCDCAP component to restart your monitoring processes.
ERSPAN support is configured in the **Select erspan monitor interfaces** screen, which appears during your first software installation on the appliance. For example:
no shut                            
monitor erspan origin ip-address 172.1.2.1 global ```
- For more information, see [CLI command reference from OT network sensors](../cli-ot-sensor.md).
+For more information, see [CLI command reference from OT network sensors](../cli-ot-sensor.md).
++ ## Next steps
-For more information, see:
+> [!div class="step-by-step"]
+> [« Onboard OT sensors to Defender for IoT](../onboard-sensors.md)
+
+> [!div class="step-by-step"]
+> [Provision OT sensors for cloud management »](../ot-deploy/provision-cloud-management.md)
-- [Traffic mirroring methods for OT monitoring](../best-practices/traffic-mirroring-methods.md)-- [Prepare your OT network for Microsoft Defender for IoT](../how-to-set-up-your-network.md)
defender-for-iot Configure Mirror Esxi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/traffic-mirroring/configure-mirror-esxi.md
Title: Configure a monitoring interface using an ESXi vSwitch - Sample - Microsoft Defender for IoT description: This article describes traffic mirroring methods with an ESXi vSwitch for OT monitoring with Microsoft Defender for IoT. Last updated 09/20/2022-+ # Configure traffic mirroring with an ESXi vSwitch
-While a virtual switch doesn't have mirroring capabilities, you can use *Promiscuous mode* in a virtual switch environment as a workaround for configuring a monitoring port, similar to a [SPAN port](configure-mirror-span.md). A SPAN port on your switch mirrors local traffic from interfaces on the switch to a different interface on the same switch.
+This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT.
-Connect the destination switch to your OT network sensor. Make sure to connect both incoming traffic and OT sensor's monitoring interface to the vSwitch in order start monitoring traffic with Defender for IoT.
-*Promiscuous mode* is a mode of operation and a security, monitoring, and administration technique that is defined at the virtual switch or portgroup level. When promiscuous mode is used, any of the virtual machine's network interfaces that are in the same portgroup can view all network traffic that goes through that virtual switch. By default, promiscuous mode is turned off.
+This article describes how to use *Promiscuous mode* in an ESXi vSwitch environment as a workaround for configuring traffic mirroring, similar to a [SPAN port](configure-mirror-span.md). A SPAN port on your switch mirrors local traffic from interfaces on the switch to a different interface on the same switch.
+
+For more information, see [Traffic mirroring with virtual switches](../best-practices/traffic-mirroring-methods.md#traffic-mirroring-with-virtual-switches).
+
+## Prerequisites
+
+Before you start, make sure that you understand your plan for network monitoring with Defender for IoT, and the SPAN ports you want to configure.
+
+For more information, see [Traffic mirroring methods for OT monitoring](../best-practices/traffic-mirroring-methods.md).
## Configure a monitoring interface using Promiscuous mode
To configure a monitoring interface with Promiscuous mode on an ESXi vSwitch:
1. Connect to the sensor, and verify that mirroring works. + ## Next steps
-For more information, see:
+> [!div class="step-by-step"]
+> [« Onboard OT sensors to Defender for IoT](../onboard-sensors.md)
+
+> [!div class="step-by-step"]
+> [Provision OT sensors for cloud management »](../ot-deploy/provision-cloud-management.md)
-- [Traffic mirroring methods for OT monitoring](../best-practices/traffic-mirroring-methods.md)-- [OT network sensor VM (VMware ESXi)](../appliance-catalog/virtual-sensor-vmware.md)-- [Prepare your OT network for Microsoft Defender for IoT](../how-to-set-up-your-network.md)
defender-for-iot Configure Mirror Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/traffic-mirroring/configure-mirror-hyper-v.md
Title: Configure a monitoring interface using a Hyper-V vSwitch - Microsoft Defender for IoT description: This article describes traffic mirroring with a Hyper-V vSwitch for OT monitoring with Microsoft Defender for IoT. Last updated 09/20/2022-+ # Configure traffic mirroring with a Hyper-V vSwitch
+This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT.
-While a virtual switch doesn't have mirroring capabilities, you can use *Promiscuous mode* in a virtual switch environment as a workaround for configuring a monitoring port, similar to a [SPAN port](configure-mirror-span.md). A SPAN port on your switch mirrors local traffic from interfaces on the switch to a different interface on the same switch.
-Connect the destination switch to your OT network sensor to monitor traffic with Defender for IoT.
+This article describes how to use *Promiscuous mode* in a Hyper-V vSwitch environment as a workaround for configuring traffic mirroring, similar to a [SPAN port](configure-mirror-span.md). A SPAN port on your switch mirrors local traffic from interfaces on the switch to a different interface on the same switch.
-*Promiscuous mode* is a mode of operation and a security, monitoring, and administration technique that is defined at the virtual switch or portgroup level. When promiscuous mode is used, any of the virtual machine's network interfaces in the same portgroup can view all network traffic that goes through that virtual switch. By default, promiscuous mode is turned off.
+For more information, see [Traffic mirroring with virtual switches](../best-practices/traffic-mirroring-methods.md#traffic-mirroring-with-virtual-switches).
## Prerequisites Before you start:
+- Make sure that you understand your plan for network monitoring with Defender for IoT, and the SPAN ports you want to configure.
+
+ For more information, see [Traffic mirroring methods for OT monitoring](../best-practices/traffic-mirroring-methods.md).
+ - Ensure that there's no instance of a virtual appliance running. - Make sure that you've enabled *Ensure SPAN* on your virtual switch's data port, and not the management port. -- Ensure that the data port SPAN configuration is not configured with an IP address.
+- Ensure that the data port SPAN configuration isn't configured with an IP address.
## Configure a traffic mirroring port with Hyper-V
Before you start:
Use Windows PowerShell or Hyper-V Manager to attach a SPAN virtual interface to the virtual switch you'd [created earlier](#configure-a-traffic-mirroring-port-with-hyper-v).
-If you use PowerShell, you'll define the name of the newly added adapter hardware as `Monitor`. If you use Hyper-V Manager, the name of the newly added adapter hardware is set to `Network Adapter`.
+If you use PowerShell, define the name of the newly added adapter hardware as `Monitor`. If you use Hyper-V Manager, the name of the newly added adapter hardware is set to `Network Adapter`.
### Attach a SPAN virtual interface to the virtual switch with PowerShell
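As a rough sketch, the attach and mirroring configuration might look like the following PowerShell, assuming the SPAN virtual switch is named `vSwitch_Span` (matching the verification command that follows) and `<SensorVMName>` is a placeholder for your OT sensor VM:

```PowerShell
# Attach a new network adapter named Monitor to the sensor VM on the SPAN virtual switch.
Add-VMNetworkAdapter -VMName <SensorVMName> -SwitchName vSwitch_Span -Name Monitor

# Set the new adapter as the port mirroring destination so it receives the mirrored traffic.
Get-VMNetworkAdapter -VMName <SensorVMName> -Name Monitor | Set-VMNetworkAdapter -PortMirroring Destination

# Turn on monitoring mode (2 = Source) for the switch's external port so its traffic is mirrored.
$extPortFeature = Get-VMSystemSwitchExtensionPortFeature -FeatureName "Ethernet Switch Port Security Settings"
$extPortFeature.SettingData.MonitorMode = 2
Add-VMSwitchExtensionPortFeature -ExternalPort -SwitchName vSwitch_Span -VMSwitchExtensionFeature $extPortFeature
```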
To verify the monitoring mode status, run:
```PowerShell Get-VMSwitchExtensionPortFeature -FeatureName "Ethernet Switch Port Security Settings" -SwitchName vSwitch_Span -ExternalPort | select -ExpandProperty SettingData ```+ | Parameter | Description | |--|--| |**vSwitch_Span** | Newly added SPAN virtual switch name | + ## Next steps
-For more information, see:
+> [!div class="step-by-step"]
+> [« Onboard OT sensors to Defender for IoT](../onboard-sensors.md)
-- [Traffic mirroring methods for OT monitoring](../best-practices/traffic-mirroring-methods.md)-- [OT network sensor VM (Microsoft Hyper-V)](../appliance-catalog/virtual-sensor-hyper-v.md)-- [Prepare your OT network for Microsoft Defender for IoT](../how-to-set-up-your-network.md)
+> [!div class="step-by-step"]
+> [Provision OT sensors for cloud management »](../ot-deploy/provision-cloud-management.md)
defender-for-iot Configure Mirror Rspan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/traffic-mirroring/configure-mirror-rspan.md
Title: Configure traffic mirroring with a Remote SPAN (RSPAN) port - Microsoft Defender for IoT description: This article describes how to configure a remote SPAN (RSPAN) port for traffic mirroring when monitoring OT networks with Microsoft Defender for IoT. Last updated 11/08/2022-+ # Configure traffic mirroring with a Remote SPAN (RSPAN) port
-Configure a remote SPAN (RSPAN) session on your switch to mirror traffic from multiple, distributed source ports into a dedicated remote VLAN.
+This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT.
-Data in the VLAN is then delivered through trunked ports, across multiple switches to a specified switch that contains the physical destination port. Connect the destination port to your OT network sensor to monitor traffic with Defender for IoT.
-The following diagram shows an example of a remote VLAN architecture:
--
-This article describes a sample procedure for configuring RSPAN on a Cisco 2960 switch with 24 ports running IOS. The steps described are intended as high-level guidance. For more information, see the Cisco documentation.
+This article describes a sample procedure for configuring [RSPAN](../best-practices/traffic-mirroring-methods.md#remote-span-rspan-ports) on a Cisco 2960 switch with 24 ports running IOS.
> [!IMPORTANT]
-> This article is intended only as guidance and not as instructions. Mirror ports on other Cisco operating systems and other switch brands are configured differently.
+> This article is intended only as guidance and not as instructions. Mirror ports on other Cisco operating systems and other switch brands are configured differently. For more information, see your switch documentation.
## Prerequisites
+- Before you start, make sure that you understand your plan for network monitoring with Defender for IoT, and the SPAN ports you want to configure.
+
+ For more information, see [Traffic mirroring methods for OT monitoring](../best-practices/traffic-mirroring-methods.md).
+ - RSPAN requires a specific VLAN to carry the monitored SPAN traffic between switches. Before you start, make sure that your switch supports RSPAN. - Make sure that the mirroring option on your switch is turned off.
On your destination switch:
1. Save the configuration. + ## Next steps
-For more information, see:
+> [!div class="step-by-step"]
+> [« Onboard OT sensors to Defender for IoT](../onboard-sensors.md)
-- [Traffic mirroring methods for OT monitoring](../best-practices/traffic-mirroring-methods.md)-- [Prepare your OT network for Microsoft Defender for IoT](../how-to-set-up-your-network.md)
+> [!div class="step-by-step"]
+> [Provision OT sensors for cloud management »](../ot-deploy/provision-cloud-management.md)
defender-for-iot Configure Mirror Span https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/traffic-mirroring/configure-mirror-span.md
Title: Configure mirroring with a switch SPAN port - Microsoft Defender for IoT description: This article describes how to configure a SPAN port for traffic mirroring when monitoring OT networks with Microsoft Defender for IoT. Last updated 09/20/2022-+ # Configure mirroring with a switch SPAN port
+This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT.
++ Configure a SPAN port on your switch to mirror local traffic from interfaces on the switch to a different interface on the same switch.
-This article provides sample configuration procedures for configuring a SPAN port, using either the Cisco CLI or GUI, for Cisco 2960 switch with 24 ports running IOS.
+This article provides sample configuration processes and procedures for configuring a SPAN port, using either the Cisco CLI or GUI, for a Cisco 2960 switch with 24 ports running IOS.
> [!IMPORTANT]
-> This article is intended only as sample guidance and not as instructions. Mirror ports on other Cisco operating systems and other switch brands are configured differently.
+> This article is intended only as sample guidance and not as instructions. Mirror ports on other Cisco operating systems and other switch brands are configured differently. For more information, see your switch documentation.
+
+## Prerequisites
+
+Before you start, make sure that you understand your plan for network monitoring with Defender for IoT, and the SPAN ports you want to configure.
+
+For more information, see [Traffic mirroring methods for OT monitoring](../best-practices/traffic-mirroring-methods.md).
## Sample CLI SPAN port configuration (Cisco 2960)
-The following commands show a a sample process for configuring a SPAN port on a Cisco 2960 via CLI:
+The following commands show a sample process for configuring a SPAN port on a Cisco 2960 via CLI:
```cli Cisco2960# configure terminal
switchport trunk encapsulation dot1q
switchport mode trunk ``` + ## Next steps
-For more information, see:
+> [!div class="step-by-step"]
+> [« Onboard OT sensors to Defender for IoT](../onboard-sensors.md)
+
+> [!div class="step-by-step"]
+> [Provision OT sensors for cloud management »](../ot-deploy/provision-cloud-management.md)
-- [Traffic mirroring methods for OT monitoring](../best-practices/traffic-mirroring-methods.md)-- [Prepare your OT network for Microsoft Defender for IoT](../how-to-set-up-your-network.md)
defender-for-iot Traffic Mirroring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/traffic-mirroring/traffic-mirroring-overview.md
+
+ Title: Configure traffic mirroring overview - Microsoft Defender for IoT
+description: This article serves as an overview for configuring traffic mirroring for Microsoft Defender for IoT.
Last updated : 03/22/2023+++
+# Configure traffic mirroring
+
+This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT and provides an overview of the procedures for configuring traffic mirroring in your network.
++
+## Prerequisites
+
+Before you configure traffic mirroring, make sure that you've decided on a traffic mirroring method. For more information, see [Prepare an OT site deployment](../best-practices/plan-prepare-deploy.md).
+
+## Traffic mirroring processes
+
+Use one of the following procedures to configure traffic mirroring in your network:
+
+**SPAN ports**:
+
+- [Configure mirroring with a switch SPAN port](configure-mirror-span.md)
+- [Configure traffic mirroring with a Remote SPAN (RSPAN) port](configure-mirror-rspan.md)
+- [Configure traffic mirroring with an encapsulated remote switched port analyzer (ERSPAN)](configure-mirror-erspan.md)
+
+**Virtual switches**:
+
+- [Configure traffic mirroring with an ESXi vSwitch](configure-mirror-esxi.md)
+- [Configure traffic mirroring with a Hyper-V vSwitch](configure-mirror-hyper-v.md)
+
+Defender for IoT also supports traffic mirroring with TAP configurations. For more information, see [Active or passive aggregation (TAP)](../best-practices/traffic-mirroring-methods.md#active-or-passive-aggregation-tap).
+
+## Next steps
+
+> [!div class="step-by-step"]
+> [« Onboard OT sensors to Defender for IoT](../onboard-sensors.md)
+
+> [!div class="step-by-step"]
+> [Provision OT sensors for cloud management »](../ot-deploy/provision-cloud-management.md)
defender-for-iot Update Legacy Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-legacy-ot-software.md
+
+ Title: Update from legacy Defender for IoT OT monitoring software versions
+description: Learn how to update (upgrade) from legacy Defender for IoT software on OT sensors and on-premises management servers.
Last updated : 02/14/2023++++
+# Update legacy OT sensors
+
+This section describes how to handle updates from legacy sensor versions, earlier than [version 22.x](release-notes.md#versions-221x).
+
+If you have earlier sensor versions installed on cloud-connected sensors, you may also have your cloud connection configured using the legacy IoT Hub method. If so, migrate to a new [cloud-connection method](architecture-connections.md), either [connecting directly](ot-deploy/provision-cloud-management.md) or using a [proxy](connect-sensors.md).
+
+## Update legacy OT sensor software
+
+Updating to version 22.x from an earlier version essentially onboards a new OT sensor, with all of the details from the legacy sensor.
+
+After the update, the newly onboarded, updated sensor requires a new activation file. We also recommend that you remove any resources left from your legacy sensor, such as deleting the sensor from Defender for IoT, and any private IoT Hubs that you'd used.
+
+For more information, see [Versioning and support for on-premises software versions](release-notes.md#versioning-and-support-for-on-premises-software-versions).
+
+**To update a legacy OT sensor version**
+
+1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** and then select the legacy OT sensor you want to update.
+
+1. Select the **Prepare to update to 22.X** option from the toolbar or from the options (**...**) from the sensor row.
+
+1. <a name="activation-file"></a>In the **Prepare to update sensor to version 22.X** message, select **Let's go**.
+
+ A new row is added on the **Sites and sensors** page, representing the newly updated OT sensor. In that row, select to download the activation file.
+
+ [!INCLUDE [root-of-trust](includes/root-of-trust.md)]
+
+ The status for the new OT sensor switches to **Pending activation**.
+
+1. Sign in to your OT sensor and select **System settings > Sensor management > Subscription & Mode Activation**.
+
+1. In the **Subscription & Mode Activation** pane, select **Select file**, and then browse to and select the activation file you'd downloaded [earlier](#activation-file).
+
+ Monitor the activation status on the **Sites and sensors** page. When the OT sensor is fully activated:
+
+ - The sensor status and health on the **Sites and sensors** page is updated with the new software version.
+ - On the OT sensor, the **Overview** page shows an activation status of **Valid**.
+
+1. After you've applied your new activation file, make sure to [delete the legacy sensor](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal). On the **Sites and sensors** page, select your legacy sensor, and then from the options (**...**) menu for that sensor, select **Delete sensor**.
+
+1. (Optional) After updating from a legacy OT sensor version, you may have leftover IoT Hubs that are no longer in use. In such cases:
+
+ 1. Review your IoT hubs to ensure that they're not being used by other services.
+ 1. Verify that your sensors are connected successfully.
+ 1. Delete any private IoT Hubs that are no longer needed.
+
+ For more information, see the [IoT Hub documentation](../../iot-hub/iot-hub-create-through-portal.md).
+
+## Migrate a cloud connection from the legacy method
+
+If you're an existing customer with a production deployment and sensors connected using the legacy IoT Hub method to connect your OT sensors to Azure, use the following steps to ensure a full and safe migration to the updated connection method.
+
+1. **Review your existing production deployment** and how sensors are currently connected to Azure. Confirm that the sensors in production networks can reach the Azure data center resource ranges.
+
+1. **Determine which connection method is right** for each production site. For more information, see [Choose a sensor connection method](architecture-connections.md#choose-a-sensor-connection-method).
+
+1. **Configure any other resources required**, such as a proxy, VPN, or ExpressRoute. For more information, see [Configure proxy settings on an OT sensor](connect-sensors.md).
+
+ For any connectivity resources outside of Defender for IoT, such as a VPN or proxy, consult with Microsoft solution architects to ensure correct configurations, security, and high availability.
+
+1. **If you have legacy sensor versions installed**, we recommend that you [update your sensors](#update-legacy-ot-sensors) at least to a version 22.1.x or higher. In this case, make sure that you've [updated your firewall rules](ot-deploy/provision-cloud-management.md) and activated your sensor with a new activation file.
+
+ Sign in to each sensor after the update to verify that the activation file was applied successfully. Also check the Defender for IoT **Sites and sensors** page in the Azure portal to make sure that the updated sensors show as **Connected**.
+
+1. **Start migrating with a test lab or reference project** where you can validate your connection and fix any issues found.
+
+1. **Create a plan of action for your migration**, including planning any maintenance windows needed.
+
+1. **After the migration in your production environment**, you can delete any previous IoT Hubs that you had used before the migration. Make sure that any IoT Hubs you delete aren't used by any other services:
+
+ - If you've upgraded your versions, make sure that all updated sensors indicate software version 22.1.x or higher.
+
+ - Check the active resources in your account and make sure there are no other services connected to your IoT Hub.
+
+ - If you're running a hybrid environment with multiple sensor versions, make sure any sensors with software version 22.1.x can connect to Azure.
+
+ Use firewall rules that allow outbound HTTPS traffic on port 443 to each of the required endpoints. For more information, see [Provision OT sensors for cloud management](ot-deploy/provision-cloud-management.md).
+
+While you'll need to migrate your connections before the [legacy version reaches end of support](release-notes.md#versioning-and-support-for-on-premises-software-versions), you can currently deploy a hybrid network of sensors, including legacy software versions with their IoT Hub connections, and sensors with updated connection methods.
+
+## Next steps
+
+For more information, see:
+
+- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)
+- [Manage individual OT sensors](how-to-manage-individual-sensors.md)
+- [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md)
+- [Troubleshoot the sensor](how-to-troubleshoot-sensor.md)
+- [Troubleshoot the on-premises management console](how-to-troubleshoot-on-premises-management-console.md)
defender-for-iot Update Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md
For more information, see [Which appliances do I need?](ot-appliance-sizing.md),
> [!NOTE] > Update files are available for [currently supported versions](release-notes.md) only. If you have OT network sensors with legacy software versions that are no longer supported, open a support ticket to access the relevant files for your update. - ## Prerequisites To perform the procedures described in this article, make sure that you have:
To perform the procedures described in this article, make sure that you have:
|Update scenario |Method details | ||| |**On-premises management console** | If the OT sensors you want to update are connected to an on-premises management console, plan to [update your on-premises management console](#update-the-on-premises-management-console) *before* updating your sensors.|
- |**Cloud-connected sensors** | Cloud connected sensors can be updated remotely, directly from the Azure portal, or manually using a downloaded update package. <br><br>[Remote updates](#update-ot-sensors) require that your OT sensor have version [22.2.3](release-notes.md#2223) or later already installed. |
+ |**Cloud-connected sensors** | Cloud connected sensors can be updated remotely, directly from the Azure portal, or manually using a downloaded update package. <br><br>[Remote updates](#update-ot-sensors) require that your OT sensor has version [22.2.3](release-notes.md#2223) or later already installed. |
|**Locally-managed sensors** | Locally-managed sensors can be updated using a downloaded update package, either via a connected on-premises management console, or directly on an OT sensor console. | - **Required access permissions**:
- - **To download update packages or push updates from the Azure portal**, you'll need access to the Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), and [Owner](../../role-based-access-control/built-in-roles.md#owner) user.
+ - **To download update packages or push updates from the Azure portal**, you'll need access to the Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user.
- **To run updates on an OT sensor or on-premises management console**, you'll need access as an **Admin** user.
To perform the procedures described in this article, make sure that you have:
For more information, see [OT sensor cloud connection methods](architecture-connections.md) and [Connect your OT sensors to the cloud](connect-sensors.md). -- Make sure that your firewall rules are configured as needed for the new version you're updating to.
+- Make sure that your firewall rules are configured as needed for the new version you're updating to.
For example, the new version may require a new or modified firewall rule to support sensor access to the Azure portal. From the **Sites and sensors** page, select **More actions > Download sensor endpoint details** for the full list of endpoints required to access the Azure portal.
To perform the procedures described in this article, make sure that you have:
This section describes how to update Defender for IoT OT sensors using any of the supported methods.
-**Sending or downloading an update package** and **running the update** are two separate steps. Each step can be done one right after the other or at different times.
+**Sending or downloading an update package** and **running the update** are two separate steps. Each step can be done one right after the other or at different times.
-For example, you might want to first send the update to your sensor or download and update package, and then have an administrator run the update later on, during a planned maintenance window.
+For example, you might want to first send the update to your sensor or download an update package, and then have an administrator run the update later on, during a planned maintenance window.
If you're using an on-premises management console, make sure that you've [updated the on-premises management console](#update-the-on-premises-management-console) *before* updating any connected sensors.
This procedure describes how to send a software version update to one or more OT
:::image type="content" source="media/update-ot-software/remote-update-step-1.png" alt-text="Screenshot of the Send package option." lightbox="media/update-ot-software/remote-update-step-1.png":::
-1. In the **Send package** pane that appears on the right, check to make sure that you're sending the correct software to the sensor you want to update. To jump to the release notes for the new version, select **Learn more** at the top of the pane.
+1. In the **Send package** pane that appears, check to make sure that you're sending the correct software to the sensor you want to update. To jump to the release notes for the new version, select **Learn more** at the top of the pane.
1. When you're ready, select **Send package**. The software transfer to your sensor machine is started, and you can see the progress in the **Sensor version** column.
Run the sensor update only when you see the :::image type="icon" source="media/u
:::image type="content" source="media/update-ot-software/remote-update-step-2.png" alt-text="Screenshot of the Update sensor option." lightbox="media/update-ot-software/remote-update-step-2.png":::
-1. In the **Update sensor (Preview)** pane that appears on the right, verify your update details.
+1. In the **Update sensor (Preview)** pane that appears, verify your update details.
When you're ready, select **Update now** > **Confirm update**. In the grid, the **Sensor version** value changes to :::image type="icon" source="media/update-ot-software/installing.png" border="false"::: **Installing** until the update is complete, when the value switches to the new sensor version number instead.
This procedure describes how to manually download the new sensor software versio
:::image type="content" source="media/update-ot-software/recommended-version.png" alt-text="Screenshot highlighting the recommended update version for the selected update scenario." lightbox="media/update-ot-software/recommended-version.png":::
-1. Scroll down further in the **Local update** pane and select **Download** to download the update package.
+1. Scroll down further in the **Local update** pane and select **Download** to download the update package.
- The update package is downloaded with a file syntax name of `sensor-secured-patcher-<Version number>.tar`, where `version number` is the version you are updating to.
+ The update package is downloaded with a file syntax name of `sensor-secured-patcher-<Version number>.tar`, where `version number` is the version you're updating to.
[!INCLUDE [root-of-trust](includes/root-of-trust.md)]
This procedure describes how to manually download the new sensor software versio
:::image type="content" source="media/update-ot-software/sensor-upload-file.png" alt-text="Screenshot of the Software update pane on the OT sensor." lightbox="media/update-ot-software/sensor-upload-file.png":::
- The update process starts, and may take about 30 minute and include one or two reboots. If your machine reboots, make sure to sign in again as prompted.
+ The update process starts, and may take about 30 minutes and include one or two reboots. If your machine reboots, make sure to sign in again as prompted.
# [On-premises management console](#tab/onprem)
This procedure describes how to update several OT sensors simultaneously from an
> [!IMPORTANT] > If you're updating multiple, locally-managed OT sensors, make sure to [update the on-premises management console](#update-an-on-premises-management-console) *before* you update any connected sensors.
->
>
-The software version on your on-premises management console must be equal to that of your most up-to-date sensor version. Each on-premises management console version is backwards compatible to older, supported sensor versions, but cannot connect to newer sensor versions.
+>
+The software version on your on-premises management console must be equal to that of your most up-to-date sensor version. Each on-premises management console version is backwards compatible to older, supported sensor versions, but can't connect to newer sensor versions.
> ### Download the update packages from the Azure portal
The software version on your on-premises management console must be equal to tha
:::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/automatic-updates.png" alt-text="Screenshot of on-premises management console with Automatic Version Updates selected." lightbox="media/how-to-manage-sensors-from-the-on-premises-management-console/automatic-updates.png"::: > [!IMPORTANT]
- > If your **Automatic Version Updates** option is red, you have a update conflict. An update conflict might occur if you have multiple sensors marked for automatic updates but the sensors currently have different software versions installed. Select the **Automatic Version Updates** option to resolve the conflict.
+ > If your **Automatic Version Updates** option is red, you have an update conflict. An update conflict might occur if you have multiple sensors marked for automatic updates but the sensors currently have different software versions installed. Select the **Automatic Version Updates** option to resolve the conflict.
> 1. Scroll down and on the right, select the **+** in the **Sensor version update** box. Browse to and select the update file you'd downloaded from the Azure portal.
This procedure describes how to update OT sensor software via the CLI, directly
1. Scroll down further in the **Local update** pane and select **Download** to download the software file.
- The update package is downloaded with a file syntax name of `sensor-secured-patcher-<Version number>.tar`, where `version number` is the version you are updating to.
+ The update package is downloaded with a file syntax name of `sensor-secured-patcher-<Version number>.tar`, where `version number` is the version you're updating to.
[!INCLUDE [root-of-trust](includes/root-of-trust.md)]
Updating an on-premises management console takes about 30 minutes.
### Download the update package from the Azure portal This procedure describes how to download an update package for a standalone update. If you're updating your on-premises management console together with connected sensors, we recommend using the **[Update sensors (Preview)](#update-ot-sensors)** menu from on the **Sites and sensors** page instead.
-
+ 1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Getting started** > **On-premises management console**. 1. In the **On-premises management console** area, select the download scenario that best describes your update, and then select **Download**.
This procedure describes how to download an update package for a standalone upda
1. Sign in when prompted and check the version number listed in the bottom-left corner to confirm that the new version is listed.
-## Update legacy OT sensor software
-
-Updating to version 22.x from an earlier version essentially onboards a new OT sensor, with all of the details from the legacy sensor.
-
-After the update, the newly onboarded, updated sensor requires a new activation file. We also recommend that you remove any resources left from your legacy sensor, such as deleting the sensor from Defender for IoT, and any private IoT Hubs that you'd used.
-
-For more information, see [Versioning and support for on-premises software versions](release-notes.md#versioning-and-support-for-on-premises-software-versions).
-
-**To update a legacy OT sensor version**
-
-1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** and then select the legacy OT sensor you want to update.
-
-1. Select the **Prepare to update to 22.X** option from the toolbar or from the options (**...**) from the sensor row.
-
-1. <a name="activation-file"></a>In the **Prepare to update sensor to version 22.X** message, select **Let's go**.
-
- A new row is added on the **Sites and sensors** page, representing the newly updated OT sensor. In that row, select to download the activation file.
-
- [!INCLUDE [root-of-trust](includes/root-of-trust.md)]
-
- The status for the new OT sensor switches to **Pending activation**.
-
-1. Sign into your OT sensor and select **System settings > Sensor management > Subscription & Mode Activation**.
-
-1. In the **Subscription & Mode Activation** pane, select **Select file**, and then browse to and select the activation file you'd downloaded [earlier](#activation-file).
-
- Monitor the activation status on the **Sites and sensors** page. When the OT sensor is fully activated:
-
- - The sensor status and health on the **Sites and sensors** page is updated with the new software version
- - On the OT sensor, the **Overview** page shows an activation status of **Valid**.
-
-1. After you've applied your new activation file, make sure to [delete the legacy sensor](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal). On the **Sites and sensors** page, select your legacy sensor, and then from the options (**...**) menu for that sensor, select **Delete sensor**.
-
-1. (Optional) After updating from a legacy OT sensor version, you may have leftover IoT Hubs that are no longer in use. In such cases:
-
- 1. Review your IoT hubs to ensure that they're not being used by other services.
- 1. Verify that your sensors are connected successfully.
- 1. Delete any private IoT Hubs that are no longer needed.
-
- For more information, see the [IoT Hub documentation](../../iot-hub/iot-hub-create-through-portal.md).
- ## Next steps For more information, see: - [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)-- [Configure OT sensor settings from the Azure portal (Public preview)](configure-sensor-settings-portal.md)-- [Manage individual sensors from the OT sensor console](how-to-manage-individual-sensors.md)-- [Manage OT sensors from the on-premises management console](how-to-manage-sensors-from-the-on-premises-management-console.md)
+- [Manage individual OT sensors](how-to-manage-individual-sensors.md)
- [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md)-- [Troubleshoot the OT sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
+- [Troubleshoot the sensor](how-to-troubleshoot-sensor.md)
+- [Troubleshoot the on-premises management console](how-to-troubleshoot-on-premises-management-console.md)
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
For more information, see [Defender for IoT device inventory](device-inventory.m
### Learn DNS traffic by configuring allowlists
-The *support* user can now decrease the number of unauthorized internet alerts by creating an allowlist of domain names on your OT sensor.
+The *support* user can now decrease the number of unauthorized internet alerts by creating an allowlist of domain names on your OT sensor.
When a DNS allowlist is configured, the sensor checks each unauthorized internet connectivity attempt against the list before triggering an alert. If the domain's FQDN is included in the allowlist, the sensor doesn't trigger the alert and allows the traffic automatically.
All OT sensor users can view the list of allowed DNS domains and their resolved
For example: :::image type="content" source="media/release-notes/data-mining-allowlist.png" alt-text="Screenshot of how to create a data mining report for DNS allowlists.":::
-
+ For more information, see [Allow internet connections on an OT network](how-to-accelerate-alert-incident-response.md#allow-internet-connections-on-an-ot-network) and [Create data mining queries](how-to-create-data-mining-queries.md).
-
### Device data retention updates The device data retention period on the OT sensor and on-premises management console has been updated to 90 days from the date of the **Last activity** value.
For more information, see [Deploy SSL/TLS certificates on OT appliances](how-to-
Activation files on locally-managed OT sensors now remain activated for as long as your Defender for IoT plan is active on your Azure subscription, just like activation files on cloud-connected OT sensors.
-You'll only need to update your activation file if you're [updating an OT sensor from a legacy version](update-ot-software.md#update-legacy-ot-sensor-software) or switching the sensor management mode, such as moving from locally-managed to cloud-connected.
+You'll only need to update your activation file if you're [updating an OT sensor from a legacy version](update-legacy-ot-software.md#update-legacy-ot-sensor-software) or switching the sensor management mode, such as moving from locally-managed to cloud-connected.
For more information, see [Manage individual sensors](how-to-manage-individual-sensors.md).
The following enhancements were added to the OT sensor's device inventory in ver
- A smoother process for [editing device details](how-to-investigate-sensor-detections-in-a-device-inventory.md#edit-device-details) on the OT sensor. Edit device details directly from the device inventory page on the OT sensor console using the new **Edit** button in the toolbar at the top of the page. - The OT sensor now supports [deleting multiple devices](how-to-investigate-sensor-detections-in-a-device-inventory.md#delete-devices) simultaneously.-- The procedures for [merging](how-to-investigate-sensor-detections-in-a-device-inventory.md#merge-devices) and [deleting](how-to-investigate-sensor-detections-in-a-device-inventory.md#delete-devices) devices now include confirmation messages that appear when the action has completed.
+- The procedures for [merging](how-to-investigate-sensor-detections-in-a-device-inventory.md#merge-devices) and [deleting](how-to-investigate-sensor-detections-in-a-device-inventory.md#delete-devices) devices now include confirmation messages that appear when the action has completed.
For more information, see [Manage your OT device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md).
All alerts with the **Suspicion of Malicious Activity** category now have a seve
For more information, see [Malware engine alerts](alert-engine-messages.md#malware-engine-alerts).
-
### Automatically resolved device notifications Starting in version 22.3.6, selected notifications on the OT sensor's **Device map** page are now automatically resolved if they aren't dismissed or otherwise handled within 14 days.
For more information, see [Tutorial: Investigate and detect threats for IoT devi
| **OT networks** | **Cloud features**: <br>- [Microsoft Sentinel: Microsoft Defender for IoT solution version 2.0.2](#microsoft-sentinel-microsoft-defender-for-iot-solution-version-202) <br>- [Download updates from the Sites and sensors page (Public preview)](#download-updates-from-the-sites-and-sensors-page-public-preview) <br>- [Alerts page GA in the Azure portal](#alerts-ga-in-the-azure-portal) <br>- [Device inventory GA in the Azure portal](#device-inventory-ga-in-the-azure-portal) <br>- [Device inventory grouping enhancements (Public preview)](#device-inventory-grouping-enhancements-public-preview) <br><br> **Sensor version 22.2.3**: [Configure OT sensor settings from the Azure portal (Public preview)](#configure-ot-sensor-settings-from-the-azure-portal-public-preview) | | **Enterprise IoT networks** | **Cloud features**: [Alerts page GA in the Azure portal](#alerts-ga-in-the-azure-portal) | -- ### Microsoft Sentinel: Microsoft Defender for IoT solution version 2.0.2 [Version 2.0.2](release-notes-sentinel.md#version-202) of the Microsoft Defender for IoT solution is now available in the [Microsoft Sentinel content hub](../../sentinel/sentinel-solutions-catalog.md), with improvements in analytics rules for incident creation, an enhanced incident details page, and performance improvements for analytics rule queries.
For more information, see:
- [Tutorial: Get started with Microsoft Defender for IoT for OT security](tutorial-onboarding.md) - [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)-- [Networking requirements](how-to-set-up-your-network.md#sensor-access-to-azure-portal)
+- [Networking requirements](networking-requirements.md#sensor-access-to-azure-portal)
### Investigation enhancements with IoT device entities in Microsoft Sentinel
Now, for locally managed sensors, you can upload that diagnostic log directly on
> For more information, see: -- [Download a diagnostics log for support](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support)
+- [Download a diagnostics log for support](how-to-troubleshoot-sensor.md#download-a-diagnostics-log-for-support)
- [Upload a diagnostics log for support](how-to-manage-sensors-on-the-cloud.md#upload-a-diagnostics-log-for-support) ### Improved security for uploading protocol plugins
Check out our new structure to follow through viewing devices and assets, managi
- [Tutorial: Microsoft Defender for IoT trial setup](tutorial-onboarding.md) - [Tutorial: Get started with Enterprise IoT](tutorial-getting-started-eiot-sensor.md) - [Plan your sensor connections for OT monitoring](best-practices/plan-network-monitoring.md)-- [About Microsoft Defender for IoT network setup](how-to-set-up-your-network.md) > [!NOTE] > To send feedback on docs via GitHub, scroll to the bottom of the page and select the **Feedback** option for **This page**. We'd be glad to hear from you!
Now you can get a summary of the log and system information that gets added to y
:::image type="content" source="media/release-notes/support-ticket-diagnostics.png" alt-text="Screenshot of the Backup and Restore dialog showing the Support Ticket Diagnostics option." lightbox="media/release-notes/support-ticket-diagnostics.png":::
-For more information, see [Download a diagnostics log for support](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support)
+For more information, see [Download a diagnostics log for support](how-to-troubleshoot-sensor.md#download-a-diagnostics-log-for-support)
### Alert updates
For more information, see [Update OT system software](update-ot-software.md).
Defender for IoT version 22.1.x supports a new set of sensor connection methods that provide simplified deployment, improved security, scalability, and flexible connectivity.
-In addition to [migration steps](connect-sensors.md#migration-for-existing-customers), this new connectivity model requires that you open a new firewall rule. For more information, see:
+In addition to [migration steps](update-legacy-ot-software.md#migrate-a-cloud-connection-from-the-legacy-method), this new connectivity model requires that you open a new firewall rule. For more information, see:
-- **New firewall requirements**: [Sensor access to Azure portal](how-to-set-up-your-network.md#sensor-access-to-azure-portal).
+- **New firewall requirements**: [Sensor access to Azure portal](networking-requirements.md#sensor-access-to-azure-portal).
- **Architecture**: [Sensor connection methods](architecture-connections.md) - **Connection procedures**: [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md)
The following Defender for IoT options and configurations have been moved, remov
- Reports previously found on the **Reports** page are now shown on the **Data Mining** page instead. You can also continue to view data mining information directly from the on-premises management console. -- Changing a locally managed sensor name is now supported only by onboarding the sensor to the Azure portal again with the new name. Sensor names can no longer be changed directly from the sensor. For more information, see [Change the name of a sensor](how-to-manage-individual-sensors.md#change-the-name-of-a-sensor).
+- Changing a locally managed sensor name is now supported only by onboarding the sensor to the Azure portal again with the new name. Sensor names can no longer be changed directly from the sensor. For more information, see [Upload a new activation file](how-to-manage-individual-sensors.md#upload-a-new-activation-file).
## Next steps
-[Getting started with Defender for IoT](getting-started.md)
+[Getting started with Defender for IoT](getting-started.md)
external-attack-surface-management Data Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/data-connections.md
Last updated 03/20/2023
-# Defender EASM data connections
-
-## **Overview**
+# Leveraging data connections
Microsoft Defender External Attack Surface Management (Defender EASM) now offers data connections to help users seamlessly integrate their attack surface data into other Microsoft solutions to supplement existing workflows with new insights. Users must get data from Defender EASM into the other security tools they use for remediation purposes to best operationalize their attack surface data.
The data connector sends Defender EASM asset data to two different platforms: Mi
[Azure Data Explorer](/azure/data-explorer/data-explorer-overview) is a big data analytics platform that helps users analyze high volumes of data from various sources with flexible customization capabilities. Defender EASM asset and insights data can be integrated to leverage visualization, query, ingestion and management capabilities within the platform. Whether building custom reports with Power BI or hunting for assets that match precise KQL queries, exporting Defender EASM data to Azure Data Explorer enables users to leverage their attack surface data with endless customization potential.
-**Data content options**
+## Data content options
+ <br>Defender EASM data connections offers users the ability to integrate two different kinds of attack surface data into the tool of their choice. Users can elect to migrate asset data, attack surface insights or both data types. Asset data provides granular details about your entire inventory, whereas attack surface insights provide immediately actionable insights based on Defender EASM dashboards. To accurately present the infrastructure that matters most to your organization, please note that both content options will only include assets in the "Approved Inventory" state. - **Asset data** <br>The Asset Data option will send data about all your inventory assets to the tool of your choice. This option is best for use cases where the granular underlying metadata is key to the operationalization of your Defender EASM integration (e.g. Sentinel, customized reporting in Data Explorer). Users can export high-level context on every asset in inventory as well as granular details specific to the particular asset type. This option does not provide any pre-determined insights about the assets; instead, it offers an expansive amount of data so users can surface the customized insights they care about most.
To accurately present the infrastructure that matters most to your organization,
**Attack surface insights** <br>Attack Surface Insights provide an actionable set of results based on the key insights delivered through dashboards in Defender EASM. This option provides less granular metadata on each asset; instead, it categorizes assets based on the corresponding insight(s) and provides the high-level context required to investigate further. This option is ideal for those who want to integrate these pre-determined insights into custom reporting workflows in conjunction with data from other tools. + ## **Configuring data connections**
firewall Enable Top Ten And Flow Trace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/enable-top-ten-and-flow-trace.md
+
+ Title: Enable Top 10 flows and Flow trace logs in Azure Firewall
+description: Learn how to enable the Top 10 flows and Flow trace logs in Azure Firewall
++++ Last updated : 03/27/2023+++
+# Enable Top 10 flows (preview) and Flow trace logs (preview) in Azure Firewall
+
+Azure Firewall has two new diagnostic logs you can use to help monitor your firewall:
+
+- Top 10 flows
+- Flow trace
+
+## Top 10 flows
+
+The Top 10 flows log (known in the industry as Fat Flows) shows the top connections that contribute to the highest throughput through the firewall.
+
+### Prerequisites
+
+- Enable [structured logs](firewall-structured-logs.md#enabledisable-structured-logs)
+- Use the Azure Resource Specific Table format in [Diagnostic Settings](firewall-diagnostics.md#enable-diagnostic-logging-through-the-azure-portal).
+
+### Enable the log
+
+Enable the log using the following Azure PowerShell commands:
+
+```azurepowershell
+Set-AzContext -SubscriptionName <SubscriptionName>
+$firewall = Get-AzFirewall -ResourceGroupName <ResourceGroupName> -Name <FirewallName>
+$firewall.EnableFatFlowLogging = $true
+Set-AzFirewall -AzureFirewall $firewall
+```
+### Verify the update
+
+There are a few ways to verify that the update was successful. One way is to navigate to the firewall **Overview** page and select **JSON view** in the top-right corner. Here's an example:
++
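+You can also check the setting from PowerShell. A minimal sketch, reusing the placeholders from the enable commands above:
+
+```azurepowershell
+# Re-read the firewall and confirm that fat flow (Top 10 flows) logging is enabled.
+$firewall = Get-AzFirewall -ResourceGroupName <ResourceGroupName> -Name <FirewallName>
+$firewall.EnableFatFlowLogging   # Expected output: True
+```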
+### Create a diagnostic setting and enable Resource Specific Table
+
+1. In the Diagnostic settings tab, select **Add diagnostic setting**.
+2. Type a Diagnostic setting name.
+3. Select **Azure Firewall Fat Flow Log** under **Categories** and any other logs you want to be supported in the firewall.
+4. In **Destination details**, select **Send to Log Analytics workspace**.
+ 1. Choose your desired Subscription and preconfigured Log Analytics workspace.
+ 1. Enable **Resource specific**.
+ :::image type="content" source="media/enable-top-ten-and-flow-trace/log-destination-details.png" alt-text="Screenshot showing log destination details.":::
+
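+If you prefer to script this step instead of using the portal, the following Azure CLI sketch creates a comparable diagnostic setting. The setting name is hypothetical, and the `AZFWFatFlow` category name is an assumption based on the resource-specific table name, so confirm the exact category in the portal before relying on it:
+
+```azurecli
+az monitor diagnostic-settings create \
+  --name "fw-fat-flow-logs" \
+  --resource <firewall-resource-id> \
+  --workspace <log-analytics-workspace-resource-id> \
+  --export-to-resource-specific true \
+  --logs '[{"category":"AZFWFatFlow","enabled":true}]'
+```
+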
+### View and analyze Azure Firewall logs
+
+1. On a firewall resource, navigate to **Logs** under the **Monitoring** tab.
+2. Select **Queries**, then load **Azure Firewall Top Flow Logs** by hovering over the option and selecting **Load to editor**.
+3. When the query loads, select **Run**.
+
+ :::image type="content" source="media/enable-top-ten-and-flow-trace/top-ten-flow-log.png" alt-text="Screenshot showing the Top 10 flow log." lightbox="media/enable-top-ten-and-flow-trace/top-ten-flow-log.png":::
+
+## Flow trace
+
+Currently, the firewall logs show traffic through the firewall in the first attempt of a TCP connection, known as the *syn* packet. However, this doesn't show the full journey of the packet in the TCP handshake. As a result, it's difficult to troubleshoot if a packet is dropped, or asymmetric routing has occurred.
+
+The following additional properties can be added:
+- SYN-ACK
+- FIN
+- FIN-ACK
+- RST
+- INVALID (flows)
+
+### Prerequisites
+
+- Enable [structured logs](firewall-structured-logs.md#enabledisable-structured-logs)
+- Use the Azure Resource Specific Table format in [Diagnostic Settings](firewall-diagnostics.md#enable-diagnostic-logging-through-the-azure-portal).
+
+### Enable the log
+
+Enable the log using the following Azure PowerShell commands:
+
+```azurepowershell
+Connect-AzAccount
+Select-AzSubscription -Subscription <subscription_id or subscription_name>
+Register-AzProviderFeature -FeatureName AFWEnableTcpConnectionLogging -ProviderNamespace Microsoft.Network
+Register-AzResourceProvider -ProviderNamespace Microsoft.Network
+```
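+
+Feature registration can take a while to complete. Before you continue, you can confirm the registration state with the following check, which uses the same feature name as the commands above:
+
+```azurepowershell
+# Wait until RegistrationState shows "Registered" before configuring diagnostics
+Get-AzProviderFeature -FeatureName AFWEnableTcpConnectionLogging -ProviderNamespace Microsoft.Network
+```
+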
+### Create a diagnostic setting and enable Resource Specific Table
+
+1. In the Diagnostic settings tab, select **Add diagnostic setting**.
+2. Type a Diagnostic setting name.
+3. Select **Azure Firewall Flow Trace Log** under **Categories** and any other logs you want to be supported in the firewall.
+4. In **Destination details**, select **Send to Log Analytics workspace**.
+ 1. Choose your desired Subscription and preconfigured Log Analytics workspace.
+ 1. Enable **Resource specific**.
+ :::image type="content" source="media/enable-top-ten-and-flow-trace/log-destination-details.png" alt-text="Screenshot showing log destination details.":::
+
+### View and analyze Azure Firewall Flow trace logs
+
+1. On a firewall resource, navigate to **Logs** under the **Monitoring** tab.
+2. Select **Queries**, then load **Azure Firewall flow trace logs** by hovering over the option and selecting **Load to editor**.
+3. When the query loads, select **Run**.
+
+ :::image type="content" source="media/enable-top-ten-and-flow-trace/trace-flow-logs.png" alt-text="Screenshot showing the Trace flow log." lightbox="media/enable-top-ten-and-flow-trace/trace-flow-logs.png":::
++
+## Next steps
+
+- [Azure Structured Firewall Logs (preview)](firewall-structured-logs.md)
firewall Explicit Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/explicit-proxy.md
+
+ Title: Azure Firewall Explicit proxy (preview)
+description: Learn about Azure Firewall's Explicit Proxy setting.
++++ Last updated : 03/30/2023+++
+# Azure Firewall Explicit proxy (preview)
+
+> [!IMPORTANT]
+> Explicit proxy is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Azure Firewall operates in a transparent proxy mode by default. In this mode, traffic is sent to the firewall using a user defined route (UDR) configuration. The firewall intercepts that traffic inline and passes it to the destination.
+
+With Explicit proxy set on the outbound path, you can configure a proxy setting on the sending application (such as a web browser) with Azure Firewall configured as the proxy. As a result, traffic from the sending application goes to the firewall's private IP address and therefore egresses directly from the firewall without using a UDR.
+
+With the Explicit proxy mode (supported for HTTP/S), you can define proxy settings in the browser to point to the firewall private IP address. You can manually configure the IP address on the browser or application, or you can configure a proxy auto config (PAC) file. The firewall can host the PAC file to serve the proxy requests after you upload it to the firewall.
+
+## Configuration
+
+Once the feature is enabled, the following page appears in the Azure portal:
++
+> [!NOTE]
+> The HTTP and HTTPS ports can't be the same.
+
+Next, create an application rule in the Firewall policy to allow this traffic to pass through the firewall.
+
+To use the Proxy autoconfiguration (PAC) file, select **Enable proxy auto-configuration**.
++
+First, upload the PAC file to a storage container that you create. Then, on the **Enable explicit proxy** page, configure the shared access signature (SAS) URL. Configure the port where the PAC is served from, and then select **Apply** at the bottom of the page.
+
+The SAS URL must have READ permissions so the firewall can download the PAC file. If changes are made to the PAC file, a new SAS URL needs to be generated and configured on the firewall **Enable explicit proxy** page.
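+
+As an illustration, the following PowerShell sketch generates a read-only SAS URL for an uploaded PAC file. The storage account, container, and blob names are hypothetical placeholders:
+
+```azurepowershell
+# Get the storage account that holds the uploaded PAC file
+$storageAccount = Get-AzStorageAccount -ResourceGroupName <ResourceGroupName> -Name <StorageAccountName>
+
+# Generate a read-only SAS URL that the firewall can use to download the PAC file
+New-AzStorageBlobSASToken -Context $storageAccount.Context `
+    -Container <ContainerName> -Blob "proxy.pac" `
+    -Permission r -ExpiryTime (Get-Date).AddMonths(6) -FullUri
+```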
+
+## Next steps
+
+To learn how to deploy an Azure Firewall, see [Deploy and configure Azure Firewall using Azure PowerShell](deploy-ps.md).
firewall Firewall Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md
For more information, see [Azure Structured Firewall Logs (preview)](firewall-st
Policy Analytics provides insights, centralized visibility, and control to Azure Firewall. IT teams today are challenged to keep Firewall rules up to date, manage existing rules, and remove unused rules. Any accidental rule updates can lead to a significant downtime for IT teams.
+### Explicit proxy (preview)
+
+With the Azure Firewall Explicit proxy set on the outbound path, you can configure a proxy setting on the sending application (such as a web browser) with Azure Firewall configured as the proxy. As a result, traffic from a sending application goes to the firewall's private IP address, and therefore egresses directly from the firewall without using a user defined route (UDR).
+
+For more information, see [Azure Firewall Explicit proxy (preview)](explicit-proxy.md).
++ ## Next steps To learn more about Azure Firewall, see [What is Azure Firewall?](overview.md).
firewall Firewall Structured Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-structured-logs.md
New resource specific tables are now available in Diagnostic setting that allows
- [Application rule aggregation log](/azure/azure-monitor/reference/tables/azfwapplicationruleaggregation) - Contains aggregated Application rule log data for Policy Analytics. - [Network rule aggregation log](/azure/azure-monitor/reference/tables/azfwnetworkruleaggregation) - Contains aggregated Network rule log data for Policy Analytics. - [NAT rule aggregation log](/azure/azure-monitor/reference/tables/azfwnatruleaggregation) - Contains aggregated NAT rule log data for Policy Analytics.
+- [Top 10 flows log (preview)](/azure/azure-monitor/reference/tables/azfwfatflow) - The Top 10 Flows (Fat Flows) log shows the top connections that are contributing to the highest throughput through the firewall.
+- [Flow trace (preview)](/azure/azure-monitor/reference/tables/azfwflowtrace) - Contains flow information, flags, and the time period when the flows were recorded. You'll be able to see full flow information such as SYN, SYN-ACK, FIN, FIN-ACK, RST, INVALID (flows).
## Enable/disable structured logs
hdinsight Hdinsight Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-private-link.md
Previously updated : 02/02/2023 Last updated : 03/30/2023 # Enable Private Link on an HDInsight cluster
To create the private endpoints:
Once the private endpoints are created, you're done with this phase of the setup. If you didn't make a note of the private IP addresses assigned to the endpoints, follow the steps below: 1. Open the client VNET in the Azure portal.
-1. Click the 'Overview' tab.
-1. You should see both the Ambari and ssh Network interfaces listed and their private IP Addresses.
+1. Click the 'Private endpoints' tab.
+1. You should see both the Ambari and ssh Network interfaces listed.
+1. Click each one and navigate to the 'DNS configuration' blade to see the private IP address.
1. Make a note of these IP addresses because they are required to connect to the cluster and properly configure DNS. ## <a name="ConfigureDNS"></a>Step 6: Configure DNS to connect over private endpoints
To configure DNS resolution through a Private DNS zone:
1. Open the private DNS zone in the Azure portal. 1. Click the 'Virtual network links' tab. 1. Click the 'Add' button.
- 1. Fill in the details: Link name, Subscription, and Virtual Network
+ 1. Fill in the details: Link name, Subscription, and Virtual Network (your client VNET)
1. Click **Save**. :::image type="content" source="media/hdinsight-private-link/virtual-network-link.png" alt-text="Diagram of virtual-network-link.":::
iot-develop Concepts Manage Device Reconnections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-manage-device-reconnections.md
+
+ Title: Manage device reconnections to create resilient applications
+
+description: Manage the device connection and reconnection process to ensure resilient applications by using the Azure IoT Hub device SDKs.
+++ Last updated : 03/30/2023+++++
+# Manage device reconnections to create resilient applications
+
+This article provides high-level guidance to help you design resilient applications by adding a device reconnection strategy. It explains why devices disconnect and need to reconnect. And it describes specific strategies that developers can use to reconnect devices that have been disconnected.
+
+## What causes disconnections
+The following are the most common reasons that devices disconnect from IoT Hub:
+
+- Expired SAS token or X.509 certificate. The device's SAS token or X.509 authentication certificate expired.
+- Network interruption. The device's connection to the network is interrupted.
+- Service disruption. The Azure IoT Hub service experiences errors or is temporarily unavailable.
+- Service reconfiguration. After you reconfigure IoT Hub service settings, it can cause devices to require reprovisioning or reconnection.
+
+## Why you need a reconnection strategy
+
+It's important to have a strategy to reconnect devices as described in the following sections. Without a reconnection strategy, you could see a negative effect on your solution's performance, availability, and cost.
+
+### Mass reconnection attempts could cause a DDoS
+
+A high number of connection attempts per second can cause a condition similar to a distributed denial-of-service attack (DDoS). This scenario is relevant for large fleets of devices numbering in the millions. The issue can extend beyond the tenant that owns the fleet and affect the entire scale-unit. A DDoS could drive a large cost increase for your Azure IoT Hub resources, due to a need to scale out. A DDoS could also hurt your solution's performance due to resource starvation. In the worst case, a DDoS can cause service interruption.
+
+### Hub failure or reconfiguration could disconnect many devices
+
+After an IoT hub experiences a failure, or after you reconfigure service settings on an IoT hub, devices might be disconnected. For proper failover, disconnected devices require reprovisioning. To learn more about failover options, see [IoT Hub high availability and disaster recovery](../iot-hub/iot-hub-ha-dr.md).
+
+### Reprovisioning many devices could increase costs
+
+After devices disconnect from IoT Hub, the optimal solution is to reconnect the device rather than reprovision it. If you use IoT Hub with DPS, DPS has a per provisioning cost. If you reprovision many devices on DPS, it increases the cost of your IoT solution. To learn more about DPS provisioning costs, see [IoT Hub DPS pricing](https://azure.microsoft.com/pricing/details/iot-hub).
+
+## Design for resiliency
+
+IoT devices often rely on noncontinuous or unstable network connections (for example, GSM or satellite). Errors can occur when devices interact with cloud-based services because of intermittent service availability and infrastructure-level or transient faults. An application that runs on a device has to manage the mechanisms for connection, reconnection, and the retry logic for sending and receiving messages. Also, the retry strategy requirements depend heavily on the device's IoT scenario, context, and capabilities.
+
+The Azure IoT Hub device SDKs aim to simplify connecting and communicating from cloud-to-device and device-to-cloud. These SDKs provide a robust way to connect to Azure IoT Hub and a comprehensive set of options for sending and receiving messages. Developers can also modify existing implementation to customize a better retry strategy for a given scenario.
+
+The relevant SDK features that support connectivity and reliable messaging are available in the following IoT Hub device SDKs. For more information, see the API documentation or specific SDK:
+
+* [C SDK](https://github.com/Azure/azure-iot-sdk-c/blob/main/doc/connection_and_messaging_reliability.md)
+
+* [.NET SDK](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/device/devdoc/retrypolicy.md)
+
+* [Java SDK](https://github.com/Azure/azure-iot-sdk-jav)
+
+* [Node SDK](https://github.com/Azure/azure-iot-sdk-node/wiki/Connectivity-and-Retries)
+
+* [Python SDK](https://github.com/Azure/azure-iot-sdk-python)
+
+The following sections describe SDK features that support connectivity.
+
+## Connection and retry
+
+This section gives an overview of the reconnection and retry patterns available when managing connections. It details implementation guidance for using a different retry policy in your device application and lists relevant APIs from the device SDKs.
+
+### Error patterns
+
+Connection failures can happen at many levels:
+
+* Network errors: disconnected socket and name resolution errors
+
+* Protocol-level errors for HTTP, AMQP, and MQTT transport: detached links or expired sessions
+
+* Application-level errors that result from either local mistakes (such as invalid credentials) or service behavior (for example, exceeding the quota or throttling)
+
+The device SDKs detect errors at all three levels. However, device SDKs don't detect and handle OS-related errors and hardware errors. The SDK design is based on [The Transient Fault Handling Guidance](/azure/architecture/best-practices/transient-faults#general-guidelines) from the Azure Architecture Center.
+
+### Retry patterns
+
+The following steps describe the retry process when connection errors are detected:
+
+1. The SDK detects the error and the associated error level: network, protocol, or application.
+
+1. The SDK uses the error filter to determine the error type and decide if a retry is needed.
+
+1. If the SDK identifies an **unrecoverable error**, operations like connection, send, and receive are stopped. The SDK notifies the user. Examples of unrecoverable errors include an authentication error and a bad endpoint error.
+
+1. If the SDK identifies a **recoverable error**, it retries according to the specified retry policy until the defined timeout elapses. The SDK uses the **Exponential back-off with jitter** retry policy by default.
+
+1. When the defined timeout expires, the SDK stops trying to connect or send. It notifies the user.
+
+1. The SDK allows the user to attach a callback to receive connection status changes.
+
+The SDKs typically provide three retry policies:
+
+* **Exponential back-off with jitter**: This default retry policy tends to be aggressive at the start and slow down over time until it reaches a maximum delay. The design is based on [Retry guidance from Azure Architecture Center](/azure/architecture/best-practices/retry-service-specific).
+
+* **Custom retry**: For some SDK languages, you can design a custom retry policy that is better suited for your scenario and then inject it into the RetryPolicy. Custom retry isn't available on the C SDK, and it isn't currently supported on the Python SDK. The Python SDK reconnects as-needed.
+
+* **No retry**: You can set retry policy to "no retry", which disables the retry logic. The SDK tries to connect once and send a message once, assuming the connection is established. This policy is typically used in scenarios with bandwidth or cost concerns. If you choose this option, messages that fail to send are lost and can't be recovered.
+
+### Retry policy APIs
+
+| SDK | SetRetryPolicy method | Policy implementations | Implementation guidance |
+|||||
+| C | [IOTHUB_CLIENT_RESULT IoTHubDeviceClient_SetRetryPolicy](https://azure.github.io/azure-iot-sdk-c/iothub__device__client_8h.html#a53604d8d75556ded769b7947268beec8) | See: [IOTHUB_CLIENT_RETRY_POLICY](https://azure.github.io/azure-iot-sdk-c/iothub__client__core__common_8h.html#a361221e523247855ff0a05c2e2870e4a) | [C implementation](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/connection_and_messaging_reliability.md) |
+| Java | [SetRetryPolicy](/jav) |
+| .NET | [DeviceClient.SetRetryPolicy](/dotnet/api/microsoft.azure.devices.client.deviceclient.setretrypolicy) | **Default**: [ExponentialBackoff class](/dotnet/api/microsoft.azure.devices.client.exponentialbackoff)<BR>**Custom:** implement [IRetryPolicy interface](/dotnet/api/microsoft.azure.devices.client.iretrypolicy)<BR>**No retry:** [NoRetry class](/dotnet/api/microsoft.azure.devices.client.noretry) | [C# implementation](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/device/devdoc/retrypolicy.md) |
+| Node | [setRetryPolicy](/javascript/api/azure-iot-device/client#azure-iot-device-client-setretrypolicy) | **Default**: [ExponentialBackoffWithJitter class](/javascript/api/azure-iot-common/exponentialbackoffwithjitter)<BR>**Custom:** implement [RetryPolicy interface](/javascript/api/azure-iot-common/retrypolicy)<BR>**No retry:** [NoRetry class](/javascript/api/azure-iot-common/noretry) | [Node implementation](https://github.com/Azure/azure-iot-sdk-node/wiki/Connectivity-and-Retries) |
+| Python | Not currently supported | Not currently supported | Built-in connection retries: Dropped connections are retried with a fixed 10-second interval by default. This functionality can be disabled if desired, and the interval can be configured. |
+
+## Hub reconnection flow
+
+If you use IoT Hub only without DPS, use the following reconnection strategy.
+
+When a device fails to connect to IoT Hub, or is disconnected from IoT Hub:
+
+1. Use an exponential back-off with jitter delay function.
+1. Reconnect to IoT Hub.
+
+The following diagram summarizes the reconnection flow:
+++
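+The SDKs implement this policy for you, but if you manage the reconnect loop in your own code, the following Python sketch shows one minimal, hypothetical way to implement an exponential back-off with jitter delay for this flow. The base and maximum delay values are illustrative assumptions:
+
+```python
+import random
+import time
+
+
+def backoff_with_jitter_delay(attempt, base_seconds=1, max_seconds=300):
+    """Return a delay that grows exponentially with the attempt number,
+    randomized (full jitter) so that many devices don't reconnect in lockstep."""
+    exponential = min(max_seconds, base_seconds * (2 ** attempt))
+    return random.uniform(0, exponential)
+
+
+def reconnect(connect_to_hub):
+    """Retry the connection until it succeeds, sleeping between attempts.
+    `connect_to_hub` is a placeholder for your own connect call; it returns True on success."""
+    attempt = 0
+    while not connect_to_hub():
+        time.sleep(backoff_with_jitter_delay(attempt))
+        attempt += 1
+```
+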
+## Hub with DPS reconnection flow
+
+If you use IoT Hub with DPS, use the following reconnection strategy.
+
+When a device fails to connect to IoT Hub, or is disconnected from IoT Hub, reconnect based on the following cases:
+
+|Reconnection scenario | Reconnection strategy |
+|||
+|For errors that allow connection retries (HTTP response code 500) | Use an exponential back-off with jitter delay function. <br> Reconnect to IoT Hub. |
+|For errors that indicate a retry is possible, but reconnection has failed 10 consecutive times | Reprovision the device to DPS. |
+|For errors that don't allow connection retries (HTTP responses 401, Unauthorized or 403, Forbidden or 404, Not Found) | Reprovision the device to DPS. |
+
+The following diagram summarizes the reconnection flow:
++
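+Continuing the hypothetical Python sketch from the previous section, the following snippet shows the decision logic from the table. The status codes and the ten-attempt limit come from the table; `connect_to_hub`, `reprovision_with_dps`, and `backoff_with_jitter_delay` are placeholders for your own implementations:
+
+```python
+import time
+
+RETRYABLE_STATUS = {500}                 # Errors that allow connection retries
+NON_RETRYABLE_STATUS = {401, 403, 404}   # Reprovision immediately on these errors
+MAX_CONSECUTIVE_FAILURES = 10
+
+
+def handle_disconnect(connect_to_hub, reprovision_with_dps, backoff_with_jitter_delay):
+    failures = 0
+    while True:
+        status = connect_to_hub()        # Placeholder: None on success, HTTP status code on failure
+        if status is None:
+            return                       # Connected
+        if status in NON_RETRYABLE_STATUS:
+            reprovision_with_dps()       # Placeholder: reprovision the device through DPS
+            return
+        failures += 1                    # Retryable error, for example 500
+        if failures >= MAX_CONSECUTIVE_FAILURES:
+            reprovision_with_dps()
+            return
+        time.sleep(backoff_with_jitter_delay(failures))
+```
+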
+## Next steps
+
+Suggested next steps include:
+
+- [Troubleshoot device disconnects](../iot-hub/iot-hub-troubleshoot-connectivity.md)
+
+- [Deploy devices at scale](../iot-dps/concepts-deploy-at-scale.md)
iot-develop How To Use Reliability Features In Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/how-to-use-reliability-features-in-sdks.md
- Title: Manage connectivity and reliable messaging-
-description: How to manage device connectivity and ensure reliable messaging when you use the Azure IoT Hub device SDKs
--- Previously updated : 02/20/2023-----
-# Manage connectivity and reliable messaging by using Azure IoT Hub device SDKs
-
-This article provides high-level guidance to help you design device applications that are more resilient. It shows you how to take advantage of the connectivity and reliable messaging features in Azure IoT device SDKs. The goal of this guide is to help you manage the following scenarios:
-
-* Fixing a dropped network connection
-
-* Switching between different network connections
-
-* Reconnecting because of transient service connection errors
-
-Implementation details vary by language. For more information, see the API documentation or specific SDK:
-
-* [C SDK](https://github.com/Azure/azure-iot-sdk-c/blob/main/doc/connection_and_messaging_reliability.md)
-
-* [.NET SDK](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/device/devdoc/retrypolicy.md)
-
-* [Java SDK](https://github.com/Azure/azure-iot-sdk-jav)
-
-* [Node SDK](https://github.com/Azure/azure-iot-sdk-node/wiki/Connectivity-and-Retries)
-
-* [Python SDK](https://github.com/Azure/azure-iot-sdk-python)
-
-## Design for resiliency
-
-IoT devices often rely on non-continuous or unstable network connections (for example, GSM or satellite). Errors can occur when devices interact with cloud-based services because of intermittent service availability and infrastructure-level or transient faults. An application that runs on a device has to manage the mechanisms for connection, re-connection, and the retry logic for sending and receiving messages. Also, the retry strategy requirements depend heavily on the device's IoT scenario, context, capabilities.
-
-The Azure IoT Hub device SDKs aim to simplify connecting and communicating from cloud-to-device and device-to-cloud. These SDKs provide a robust way to connect to Azure IoT Hub and a comprehensive set of options for sending and receiving messages. Developers can also modify existing implementation to customize a better retry strategy for a given scenario.
-
-The relevant SDK features that support connectivity and reliable messaging are covered in the following sections.
-
-## Connection and retry
-
-This section gives an overview of the re-connection and retry patterns available when managing connections. It details implementation guidance for using a different retry policy in your device application and lists relevant APIs from the device SDKs.
-
-### Error patterns
-
-Connection failures can happen at many levels:
-
-* Network errors: disconnected socket and name resolution errors
-
-* Protocol-level errors for HTTP, AMQP, and MQTT transport: detached links or expired sessions
-
-* Application-level errors that result from either local mistakes: invalid credentials or service behavior (for example, exceeding the quota or throttling)
-
-The device SDKs detect errors at all three levels. OS-related errors and hardware errors are not detected and handled by the device SDKs. The SDK design is based on [The Transient Fault Handling Guidance](/azure/architecture/best-practices/transient-faults#general-guidelines) from the Azure Architecture Center.
-
-### Retry patterns
-
-The following steps describe the retry process when connection errors are detected:
-
-1. The SDK detects the error and the associated error in the network, protocol, or application.
-
-1. The SDK uses the error filter to determine the error type and decide if a retry is needed.
-
-1. If the SDK identifies an **unrecoverable error**, operations like connection, send, and receive are stopped. The SDK notifies the user. Examples of unrecoverable errors include an authentication error and a bad endpoint error.
-
-1. If the SDK identifies a **recoverable error**, it retries according to the specified retry policy until the defined timeout elapses. Note that the SDK uses **Exponential back-off with jitter** retry policy by default.
-
-1. When the defined timeout expires, the SDK stops trying to connect or send. It notifies the user.
-
-1. The SDK allows the user to attach a callback to receive connection status changes.
-
-The SDKs typically provide three retry policies:
-
-* **Exponential back-off with jitter**: This default retry policy tends to be aggressive at the start and slow down over time until it reaches a maximum delay. The design is based on [Retry guidance from Azure Architecture Center](/azure/architecture/best-practices/retry-service-specific).
-
-* **Custom retry**: For some SDK languages, you can design a custom retry policy that is better suited for your scenario and then inject it into the RetryPolicy. Custom retry isn't available on the C SDK, and it is not currently supported on the Python SDK. The Python SDK reconnects as-needed.
-
-* **No retry**: You can set retry policy to "no retry", which disables the retry logic. The SDK tries to connect once and send a message once, assuming the connection is established. This policy is typically used in scenarios with bandwidth or cost concerns. If you choose this option, messages that fail to send are lost and can't be recovered.
-
-### Retry policy APIs
-
-| SDK | SetRetryPolicy method | Policy implementations | Implementation guidance |
-|||||
-| C | [IOTHUB_CLIENT_RESULT IoTHubDeviceClient_SetRetryPolicy](https://azure.github.io/azure-iot-sdk-c/iothub__device__client_8h.html#a53604d8d75556ded769b7947268beec8) | See: [IOTHUB_CLIENT_RETRY_POLICY](https://azure.github.io/azure-iot-sdk-c/iothub__client__core__common_8h.html#a361221e523247855ff0a05c2e2870e4a) | [C implementation](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/connection_and_messaging_reliability.md) |
-| Java | [SetRetryPolicy](/jav) |
-| .NET | [DeviceClient.SetRetryPolicy](/dotnet/api/microsoft.azure.devices.client.deviceclient.setretrypolicy) | **Default**: [ExponentialBackoff class](/dotnet/api/microsoft.azure.devices.client.exponentialbackoff)<BR>**Custom:** implement [IRetryPolicy interface](/dotnet/api/microsoft.azure.devices.client.iretrypolicy)<BR>**No retry:** [NoRetry class](/dotnet/api/microsoft.azure.devices.client.noretry) | [C# implementation](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/device/devdoc/retrypolicy.md) |
-| Node | [setRetryPolicy](/javascript/api/azure-iot-device/client#azure-iot-device-client-setretrypolicy) | **Default**: [ExponentialBackoffWithJitter class](/javascript/api/azure-iot-common/exponentialbackoffwithjitter)<BR>**Custom:** implement [RetryPolicy interface](/javascript/api/azure-iot-common/retrypolicy)<BR>**No retry:** [NoRetry class](/javascript/api/azure-iot-common/noretry) | [Node implementation](https://github.com/Azure/azure-iot-sdk-node/wiki/Connectivity-and-Retries) |
-| Python | Not currently supported | Not currently supported | Built-in connection retries: Dropped connections will be retried with a fixed 10 second interval by default. This functionality can be disabled if desired, and the interval can be configured. |
-
-## Next steps
-
-Suggested next steps include:
-
-* [Troubleshoot device disconnects](../iot-hub/iot-hub-troubleshoot-connectivity.md)
-
-* [Use the Azure IoT Device SDKs](./about-iot-sdks.md)
load-testing Concept Load Test App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/concept-load-test-app-service.md
+
+ Title: Load test Azure App Service apps
+
+description: 'Learn how to use Azure Load Testing with Azure App Service apps. Run load tests, use environment variables, and gain insights with server metrics and diagnostics.'
++++ Last updated : 03/23/2023++++
+# Load test Azure App Service apps with Azure Load Testing
+
+This article shows how to use Azure Load Testing with applications hosted on Azure App Service. You learn how to run a load test to validate your application's performance. Use environment variables to make your load test more configurable. This feature allows you to reuse your load test across different deployment slots. During and after the test, you can get detailed insights by using server-side metrics and App Service diagnostics, which helps you to identify and troubleshoot any potential issues.
+
+[Azure App Service](/azure/app-service/overview) is a fully managed HTTP-based service that enables you to build, deploy, and scale web applications and APIs in the cloud. You can develop in your favorite language, be it .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. Applications run and scale with ease on both Windows and Linux-based environments.
+
+## Why load test Azure App Service web apps?
+
+Even though Azure App Service is a fully managed service for running applications, load testing can offer significant benefits in terms of reliability, performance, and cost optimization:
+
+- Validate that your application and all dependent application components can handle your expected load
+- Verify that your application meets your performance requirements, such as maximum response time or latency
+- Identify performance bottlenecks within your application
+- Do more with less: right-size your computing resources
+- Ensure that new releases don't introduce performance regressions
+
+Often, applications consist of multiple application components besides the app service application. For example, the application might use a database or other data storage solution, invoke dependent serverless functions, or use a caching solution for improving performance. Each of these application components contributes to the availability and performance of your overall application. By running a load test, you can validate that the entire application can support the expected user load without failures. Also, you can verify that the requests meet your performance and availability requirements.
+
+The application implementation and algorithms might affect application performance and stability under load. For example, storing data in memory might lead to excessive memory consumption and application stability issues. You can use load testing to perform a *soak test* and simulate sustained user load over a longer period of time, to identify such problems in the application implementation.
+
+Each application component has different options for allocating computing resources and scalability settings. For example, an app service always runs in an [App Service plan](/azure/app-service/overview-hosting-plans). An App Service plan defines a set of compute resources for a web app to run. Optionally, you can choose to enable [autoscaling](/azure/azure-monitor/autoscale/autoscale-overview) to automatically add more resources, based on specific metrics. With load testing, you can ensure that you add the right resources to match the characteristics of your application. For example, if your application is memory-intensive, you might choose compute instances that have more memory. Also, by [monitoring application metrics](./how-to-monitor-server-side-metrics.md) during the load test, you can optimize costs by allocating the right type and amount of computing resources.
+
+By integrating load testing in your CI/CD pipeline and by [adding fail criteria to your load test](./how-to-define-test-criteria.md), you can quickly identify performance regressions introduced by application changes. For example, adding an external service call in the application might cause the overall response time to exceed your maximum response time requirement.
+
+## Create a load test for an app on Azure App Service
+
+Azure Load Testing enables you to create load tests for your application in two ways:
+
+- Create a URL-based quick test
+- Use an existing Apache JMeter script (JMX file)
+
+Use the quick test experience to create a load test for a specific endpoint URL, directly from within the Azure portal. For example, use the App Service web app *default domain* to perform a load test of the web application home page. You can specify basic load test configuration settings, such as the number of [virtual users](./concept-load-testing-concepts.md#virtual-users), test duration, and [ramp-up time](./concept-load-testing-concepts.md#ramp-up-time). Azure Load Testing then generates the corresponding JMeter test script, and runs it against your endpoint. You can modify the test script and configuration settings at any time. Get started by [creating a URL-based load test](./quickstart-create-and-run-load-test.md).
++
+Alternatively, create a new load test by uploading an existing JMeter script. Use this approach to load test multiple pages or endpoints in a single test, to test authenticated endpoints, to use parameters in the test script, or to use more advanced load patterns. Azure Load Testing provides high-fidelity support of JMeter to enable you to reuse existing load test scripts. Learn how to [create a load test by using an existing JMeter script](./how-to-create-and-run-load-test-with-jmeter-script.md).
+
+If you're just getting started with Azure Load Testing, you might [create a quick test](./quickstart-create-and-run-load-test.md) first, and then further modify and extend the test script that Azure Load Testing generated.
+
+After you create and run your load test, Azure Load Testing provides a dashboard with test run statistics, such as [response time](./concept-load-testing-concepts.md#response-time), error percentage and [throughput](./concept-load-testing-concepts.md#requests-per-second-rps).
+
+## Use test fail criteria
+
+The Azure Load Testing dashboard provides insights about a specific load test run and how the application responds to simulated load. To verify that your application can meet your performance and availability requirements, specify *load test fail criteria*.
+
+Test fail criteria enable you to configure conditions for load test *client-side metrics*. If a load test run doesn't meet these conditions, the test is considered to fail. For example, specify that the test fails when the average response time of requests, or the percentage of failed requests, exceeds a given threshold. You can add fail criteria to your load test at any time, regardless of whether it's a quick test or a test based on an uploaded JMeter script.
++
+When you run load tests as part of your CI/CD pipeline, you can use test fail criteria to quickly identify performance regressions with an application build.
+
+Learn how to [configure test fail criteria](./how-to-define-test-criteria.md) for your load test.
+
+## Monitor application metrics
+
+During a load test, Azure Load Testing collects [metrics](./concept-load-testing-concepts.md#metrics) about the test execution. The client-side metrics provide information about the test run, from a test-engine perspective. For example, the end-to-end response time, requests per second, or error percentage. These metrics give an overall indication whether the application can support the simulated user load.
+
+To get insights into the performance and stability of the application and its components, Azure Load Testing enables you to monitor application metrics, also referred to as *server-side metrics*. Monitoring application metrics helps identify performance bottlenecks in your application, or indicates which components have too many or too few compute resources allocated.
+
+For applications hosted on Azure App Service, use App Service diagnostics to get extra insights into the performance and health of the application.
+
+### Server-side metrics in Azure Load Testing
+
+Azure Load Testing lets you monitor server-side metrics for your Azure app components when you run a load test. You can then visualize and analyze these metrics in the Azure Load Testing dashboard. Learn more about the [Azure resource types that Azure Load Testing supports](./resource-supported-azure-resource-types.md).
++
+In the load test configuration, select the list of Azure resources for your application components. When you add an Azure resource to your load test, Azure Load Testing automatically selects default resource metrics to monitor while running the load test. For example, when you add an App Service plan, Azure Load Testing monitors average CPU percentage and average memory percentage. You can add or remove resource metrics for your load test.
++
+Learn more about how to [monitor server-side metrics in Azure Load Testing](./how-to-monitor-server-side-metrics.md).
+
+### App Service diagnostics
+
+When the application you're load testing is hosted on Azure App Service, you can get extra insights by using [App Service diagnostics](/azure/app-service/overview-diagnostics). App Service diagnostics is an intelligent and interactive way to help troubleshoot your app, with no configuration required. When you run into issues with your app, App Service diagnostics can help you resolve the issue easily and quickly.
+
+When you add an App Service application component to your load test configuration, the load testing dashboard provides a direct link to the App Service diagnostics dashboard for your App service resource.
++
+App Service diagnostics enables you to view in-depth information and dashboards about the performance, resource usage, and stability of your app service. In the screenshot, you notice that there are concerns about the CPU usage, app performance, and failed requests.
++
+> [!NOTE]
+> It can take up to 45 minutes for the insights data to be available in App Service diagnostics.
+
+## Parameterize your test for deployment slots
+
+[Azure App Service deployment slots](/azure/app-service/deploy-staging-slots) enable you to set up staging environments for your application. Each deployment slot has a separate URL. You can easily reuse your load testing script across multiple slots by using environment variables in the load test configuration.
+
+When you create a quick test, Azure Load Testing generates a generic JMeter script and uses environment variables to pass the target URL to the script.
++
+To use environment variables for passing the deployment slot URL to your JMeter test script, perform the following steps:
+
+1. Add an environment variable in the load test configuration.
+
+1. Reference the environment variable in your test script by using the `System.getenv` function.
+
+ ```xml
+ <elementProp name="domain" elementType="Argument">
+ <stringProp name="Argument.name">domain</stringProp>
+ <stringProp name="Argument.value">${__BeanShell( System.getenv("domain") )}</stringProp>
+ <stringProp name="Argument.metadata">=</stringProp>
+ </elementProp>
+ ```
+
+Learn how to [parameterize a load test by using environment variables](./how-to-parameterize-load-tests.md).
+
+You can also use environment variables to pass other configuration settings to the JMeter test script. For example, you might pass the number of virtual users, or the file name of a [CSV input file](./how-to-read-csv-data.md) to the test script.
+
+## Next steps
+
+- Get started by [creating a URL-based load test](./quickstart-create-and-run-load-test.md).
+- Learn how to [identify performance bottlenecks](./tutorial-identify-bottlenecks-azure-portal.md) for Azure applications.
+- Learn how to [configure your test for high-scale load](./how-to-high-scale-load.md).
+- Learn how to [configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
load-testing How To Appservice Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-appservice-insights.md
- Title: Get load test insights from App Service diagnostics-
-description: 'Learn how to get detailed application performance insights from App Service diagnostics and Azure Load Testing.'
---- Previously updated : 10/24/2022----
-# Get performance insights from App Service diagnostics and Azure Load Testing
-
-Azure Load Testing collects detailed resource metrics across your Azure app components to help identify performance bottlenecks. In this article, you learn how to use App Service Diagnostics to get additional insights when load testing Azure App Service workloads.
-
-[App Service diagnostics](/azure/app-service/overview-diagnostics) is an intelligent and interactive way to help troubleshoot your app, with no configuration required. When you run into issues with your app, App Service diagnostics can help you resolve the issue easily and quickly.
-
-## Prerequisites
--- An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -- An Azure Load Testing resource. If you need to create an Azure Load Testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md).-- An Azure App Service workload that you're running a load test against and that you've added to the app components to monitor during the load test.-
-## Use App Service diagnostics for your load test
-
-Azure Load Testing lets you monitor server-side metrics for your Azure app components for a load test. You can then visualize and analyze these metrics in the Azure Load Testing dashboard.
-
-When the application you're load testing is hosted on Azure App Service, you can get extra insights by using [App Service diagnostics](/azure/app-service/overview-diagnostics).
-
-To view the App Service diagnostics information for your application under load test:
-
-1. Go to the [Azure portal](https://portal.azure.com).
-
-1. Add your App Service resource to the load test app components. Follow the steps in [monitor server-side metrics](./how-to-monitor-server-side-metrics.md) to add your app service.
-
- :::image type="content" source="media/how-to-appservice-insights/test-monitoring-app-service.png" alt-text="Screenshot of the Monitoring tab when editing a load test in the Azure portal, highlighting the App Service resource.":::
-
-1. Select **Run** to run the load test.
-
- After the test finishes, you'll notice a section about App Service on the test result dashboard.
-
- :::image type="content" source="media/how-to-appservice-insights/test-result-app-service-diagnostics.png" alt-text="Screenshot that shows the 'App Service' section on the load testing dashboard in the Azure portal.":::
-
-1. Select the link in **Additional insights** to view the App Service diagnostics information.
-
- App Service diagnostics enables you to view in-depth information and dashboard about the performance, resource usage, and stability of your app service.
-
- In the screenshot, you notice that there are concerns about the CPU usage, app performance, and failed requests.
-
- :::image type="content" source="media/how-to-appservice-insights/app-diagnostics-overview.png" alt-text="Screenshot that shows the App Service diagnostics overview page, with a list of interactive reports on the left pane.":::
-
- On the left pane, you can drill deeper into specific issues by selecting one the diagnostics reports. For example, the following screenshot shows the **High CPU Analysis** report.
-
- :::image type="content" source="media/how-to-appservice-insights/app-diagnostics-high-cpu.png" alt-text="Screenshot that shows the App Service diagnostics CPU usage report.":::
-
- The following screenshot shows the **Web App Slow** report, which gives details and recommendations about application performance.
-
- :::image type="content" source="media/how-to-appservice-insights/app-diagnostics-web-app-slow.png" alt-text="Screenshot that shows the App Service diagnostics slow application report.":::
-
- > [!NOTE]
- > It can take up to 45 minutes for the insights data to be displayed on this page.
-
-## Next steps
--- Learn how to [parameterize a load test with secrets and environment variables](./how-to-parameterize-load-tests.md).-- Learn how to [identify performance bottlenecks](./tutorial-identify-bottlenecks-azure-portal.md) for Azure applications.-- Learn how to [configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
load-testing How To Troubleshoot Failing Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-troubleshoot-failing-test.md
For Azure-hosted applications, you can configure your load test to monitor resou
Learn how you can [monitor server-side application metrics in Azure Load Testing](./how-to-monitor-server-side-metrics.md).
-For application endpoints that you host on Azure App Service, you can [use App Service Insights to get additional insights](./how-to-appservice-insights.md) about the application behavior.
+For application endpoints that you host on Azure App Service, you can [use App Service Insights to get additional insights](./concept-load-test-app-service.md#monitor-application-metrics) about the application behavior.
## Next steps - Learn how to [Export the load test result](./how-to-export-test-results.md). - Learn how to [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md).-- Learn how to [Get detailed insights for Azure App Service based applications](./how-to-appservice-insights.md).
+- Learn how to [Get detailed insights for Azure App Service based applications](./concept-load-test-app-service.md#monitor-application-metrics).
- Learn how to [Compare multiple load test runs](./how-to-compare-multiple-test-runs.md).
load-testing Resource Supported Azure Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-supported-azure-resource-types.md
This section lists the Azure resource types that Azure Load Testing supports for
## Next steps * Learn how to [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md).
-* Learn how to [Get more insights from App Service diagnostics](./how-to-appservice-insights.md).
* Learn how to [Compare multiple test runs](./how-to-compare-multiple-test-runs.md).
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
Learn how to access Azure resources from your scoring script with an online endpoint and either a system-assigned managed identity or a user-assigned managed identity.
-Managed endpoints allow Azure Machine Learning to manage the burden of provisioning your compute resource and deploying your machine learning model. Typically your model needs to access Azure resources such as the Azure Container Registry or your blob storage for inferencing; with a managed identity you can access these resources without needing to manage credentials in your code. [Learn more about managed identities](../active-directory/managed-identities-azure-resources/overview.md).
+Both managed endpoints and Kubernetes endpoints allow Azure Machine Learning to manage the burden of provisioning your compute resource and deploying your machine learning model. Typically your model needs to access Azure resources such as the Azure Container Registry or your blob storage for inferencing; with a managed identity you can access these resources without needing to manage credentials in your code. [Learn more about managed identities](../active-directory/managed-identities-azure-resources/overview.md).
This guide assumes you don't have a managed identity, a storage account or an online endpoint. If you already have these components, skip to the [give access permission to the managed identity](#give-access-permission-to-the-managed-identity) section.
This guide assumes you don't have a managed identity, a storage account or an on
## Limitations * The identity for an endpoint is immutable. During endpoint creation, you can associate it with a system-assigned identity (default) or a user-assigned identity. You can't change the identity after the endpoint has been created.
+* If your Azure Container Registry (ACR) and blob storage are configured as private, that is, behind a virtual network, access from the Kubernetes endpoint should be over the private link, regardless of whether your workspace is public or private. For more details about the private link setting, see [How to secure workspace vnet](./how-to-secure-workspace-vnet.md#azure-container-registry).
+ ## Configure variables for deployment
Then, get the Principal ID of the System-assigned managed identity:
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-sai.ipynb?name=6-get-sai-details)]
-Next, give assign the `Storage Blob Data Reader` role to the endpoint. The Role Definition is retrieved by name and passed along with the Principal ID of the endpoint. The role is applied at the scope of the storage account created above and allows the endpoint to read the file.
+Next, assign the `Storage Blob Data Reader` role to the endpoint. The Role Definition is retrieved by name and passed along with the Principal ID of the endpoint. The role is applied at the scope of the storage account created above and allows the endpoint to read the file.
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-sai.ipynb?name=6-give-permission-user-storage-account)]
Delete the User-assigned managed identity:
* To see which compute resources you can use, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md). * For more on costs, see [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md). * For information on monitoring endpoints, see [Monitor managed online endpoints](how-to-monitor-online-endpoints.md).
-* For limitations for managed endpoints, see [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
+* For limitations for managed endpoints, see [Manage and increase quotas for resources with Azure Machine Learning-managed online endpoint](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
+* For limitations for Kubernetes endpoints, see [Manage and increase quotas for resources with Azure Machine Learning-kubernetes online endpoint](how-to-manage-quotas.md#azure-machine-learning-kubernetes-online-endpoints).
machine-learning How To Attach Kubernetes To Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-to-workspace.md
# Attach a Kubernetes cluster to Azure Machine Learning workspace + Once Azure Machine Learning extension is deployed on AKS or Arc Kubernetes cluster, you can attach the Kubernetes cluster to Azure Machine Learning workspace and create compute targets for ML professionals to use. ## Prerequisites
We support two ways to attach a Kubernetes cluster to Azure Machine Learning wor
### [Azure CLI](#tab/cli) -
-The following commands show how to attach an AKS and Azure Arc-enabled Kubernetes cluster, and use it as a compute target with managed identity enabled.
+The following CLI v2 commands show how to attach an AKS and Azure Arc-enabled Kubernetes cluster, and use it as a compute target with managed identity enabled.
**AKS cluster**
Attaching a Kubernetes cluster makes it available to your workspace for training
In the Kubernetes clusters tab, the initial state of your cluster is *Creating*. When the cluster is successfully attached, the state changes to *Succeeded*. Otherwise, the state changes to *Failed*. :::image type="content" source="media/how-to-attach-arc-kubernetes/kubernetes-creating.png" alt-text="Screenshot of attached settings for configuration of Kubernetes cluster.":::+
+### [Azure SDK](#tab/sdk)
+
+The following python SDK v2 code shows how to attach an AKS and Azure Arc-enabled Kubernetes cluster, and use it as a compute target with managed identity enabled.
+
+**AKS cluster**
+
+```python
+from azure.ai.ml import load_compute
+
+# for AKS cluster, the resource_id should be something like '/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.ContainerService/managedClusters/<CLUSTER_NAME>''
+compute_params = [
+ {"name": "<COMPUTE_NAME>"},
+ {"type": "kubernetes"},
+ {
+ "resource_id": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.ContainerService/managedClusters/<CLUSTER_NAME>"
+ },
+]
+k8s_compute = load_compute(source=None, params_override=compute_params)
+ml_client.begin_create_or_update(k8s_compute).result()
+```
+
+**Arc Kubernetes cluster**
+
+```python
+from azure.ai.ml import load_compute
+
+# for arc connected cluster, the resource_id should be something like '/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.ContainerService/connectedClusters/<CLUSTER_NAME>''
+compute_params = [
+ {"name": "<COMPUTE_NAME>"},
+ {"type": "kubernetes"},
+ {
+ "resource_id": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.ContainerService/connectedClusters/<CLUSTER_NAME>"
+ },
+]
+k8s_compute = load_compute(source=None, params_override=compute_params)
+ml_client.begin_create_or_update(k8s_compute).result()
+
+```
machine-learning How To Batch Scoring Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-batch-scoring-script.md
Any library that your scoring script requires to run needs to be indicated in th
__mnist/environment/conda.yml__ Refer to [Create a batch deployment](how-to-use-batch-endpoint.md#create-a-batch-deployment) for more details about how to indicate the environment for your model.
machine-learning How To Deploy Kubernetes Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-kubernetes-extension.md
We list four typical extension deployment scenarios for reference. To deploy ext
For Azure Machine Learning extension deployment on AKS cluster, make sure to specify `managedClusters` value for `--cluster-type` parameter. Run the following Azure CLI command to deploy Azure Machine Learning extension: ```azurecli
- az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer allowInsecureConnections=True inferenceLoadBalancerHA=False --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
+ az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer allowInsecureConnections=True InferenceRouterHA=False --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
``` - **Use Arc Kubernetes cluster outside of Azure for a quick proof of concept, to run training jobs only**
machine-learning How To Deploy Model Custom Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-custom-output.md
In any of those cases, Batch Deployments allow you to take control of the output
## About this sample
-This example shows how you can deploy a model to perform batch inference and customize how your predictions are written in the output. This example uses an MLflow model based on the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but we are using a subset of 14 of them. The model tries to predict the presence of heart disease in a patient. It is integer valued from 0 (no presence) to 1 (presence).
+This example shows how you can deploy a model to perform batch inference and customize how your predictions are written in the output. This example uses a model based on the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but we are using a subset of 14 of them. The model tries to predict the presence of heart disease in a patient. It is integer valued from 0 (no presence) to 1 (presence).
The model has been trained using an `XGBBoost` classifier and all the required preprocessing has been packaged as a `scikit-learn` pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
-The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the `cli/endpoints/batch` if you are using the Azure CLI or `sdk/endpoints/batch` if you are using our SDK for Python.
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the `cli/endpoints/batch/deploy-models/custom-outputs-parquet` if you are using the Azure CLI or `sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet` if you are using our SDK for Python.
```azurecli git clone https://github.com/Azure/azureml-examples --depth 1
-cd azureml-examples/cli/endpoints/batch
+cd azureml-examples/cli/endpoints/batch/deploy-models/custom-outputs-parquet
``` ### Follow along in Jupyter Notebooks
-You can follow along this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [custom-output-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/custom-output-batch.ipynb).
+You can follow along this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [custom-output-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb).
## Prerequisites
Batch Endpoint can only deploy registered models. In this case, we already have
# [Azure CLI](#tab/cli) ```azurecli
-MODEL_NAME='heart-classifier'
-az ml model create --name $MODEL_NAME --type "mlflow_model" --path "heart-classifier-mlflow/model"
+MODEL_NAME='heart-classifier-sklpipe'
+az ml model create --name $MODEL_NAME --type "custom_model" --path "model"
``` # [Python](#tab/sdk)
az ml model create --name $MODEL_NAME --type "mlflow_model" --path "heart-classi
```python model_name = 'heart-classifier' model = ml_client.models.create_or_update(
- Model(name=model_name, path='heart-classifier-mlflow/model', type=AssetTypes.MLFLOW_MODEL)
+ Model(name=model_name, path='model', type=AssetTypes.CUSTOM_MODEL)
) ```
-> [!NOTE]
-> The model used in this tutorial is an MLflow model. However, the steps apply for both MLflow models and custom models.
- ### Creating a scoring script We need to create a scoring script that can read the input data provided by the batch deployment and return the scores of the model. We are also going to write directly to the output folder of the job. In summary, the proposed scoring script does as follows:
We need to create a scoring script that can read the input data provided by the
3. Appends the predictions to a `pandas.DataFrame` along with the input data. 4. Writes the data in a file named as the input file, but in `parquet` format.
-__batch_driver_parquet.py__
+__code/batch_driver.py__
-```python
-import os
-import mlflow
-import pandas as pd
-from pathlib import Path
-
-def init():
- global model
- global output_path
-
- # AZUREML_MODEL_DIR is an environment variable created during deployment
- # It is the path to the model folder
- # Please provide your model's folder name if there's one:
- model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
- output_path = os.environ['AZUREML_BI_OUTPUT_PATH']
- model = mlflow.pyfunc.load_model(model_path)
-
-def run(mini_batch):
- for file_path in mini_batch:
- data = pd.read_csv(file_path)
- pred = model.predict(data)
-
- data['prediction'] = pred
-
- output_file_name = Path(file_path).stem
- output_file_path = os.path.join(output_path, output_file_name + '.parquet')
- data.to_parquet(output_file_path)
-
- return mini_batch
-```
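For reference, a minimal sketch of what `code/batch_driver.py` can look like, based on the previous version of the script shown above; the actual file in the repository may differ, for example in how the model is loaded:

```python
import os
from pathlib import Path

import mlflow
import pandas as pd


def init():
    global model
    global output_path

    # AZUREML_MODEL_DIR points to the registered model folder created at deployment time.
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
    # AZUREML_BI_OUTPUT_PATH is the output folder of the batch deployment job.
    output_path = os.environ["AZUREML_BI_OUTPUT_PATH"]
    model = mlflow.pyfunc.load_model(model_path)


def run(mini_batch):
    for file_path in mini_batch:
        data = pd.read_csv(file_path)
        pred = model.predict(data)

        data["prediction"] = pred

        # Write one parquet file per input file, named after the input file.
        output_file_name = Path(file_path).stem
        output_file_path = os.path.join(output_path, output_file_name + ".parquet")
        data.to_parquet(output_file_path)

    return mini_batch
```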
__Remarks:__ * Notice how the environment variable `AZUREML_BI_OUTPUT_PATH` is used to get access to the output path of the deployment job.
Follow the next steps to create a deployment using the previous scoring script:
No extra step is required for the Azure Machine Learning CLI. The environment definition will be included in the deployment file.
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deployment.yml" range="8-11":::
+
# [Python](#tab/sdk) Let's get a reference to the environment: ```python environment = Environment(
- conda_file="./heart-classifier-mlflow/environment/conda.yaml",
- image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
+ name="batch-mlflow-xgboost",
+ conda_file="environment/conda.yaml",
+ image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
) ```
-2. MLflow models don't require you to indicate an environment or a scoring script when creating the deployments as it is created for you. However, in this case we are going to indicate a scoring script and environment since we want to customize how inference is executed.
+2. Create the deployment
> [!NOTE] > This example assumes you have an endpoint created with the name `heart-classifier-batch` and a compute cluster with name `cpu-cluster`. If you don't, please follow the steps in the doc [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md).
Follow the next steps to create a deployment using the previous scoring script:
To create a new deployment under the created endpoint, create a `YAML` configuration like the following:
- ```yaml
- $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
- endpoint_name: heart-classifier-batch
- name: classifier-xgboost-parquet
- description: A heart condition classifier based on XGBoost
- model: azureml:heart-classifier@latest
- environment:
- image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest
- conda_file: ./heart-classifier-mlflow/environment/conda.yaml
- code_configuration:
- code: ./heart-classifier-custom/code/
- scoring_script: batch_driver_parquet.py
- compute: azureml:cpu-cluster
- resources:
- instance_count: 2
- max_concurrency_per_instance: 2
- mini_batch_size: 2
- output_action: summary_only
- retry_settings:
- max_retries: 3
- timeout: 300
- error_threshold: -1
- logging_level: info
- ```
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deployment.yml":::
Then, create the deployment with the following command:
- ```azurecli
- DEPLOYMENT_NAME="classifier-xgboost-parquet"
- az ml batch-deployment create -f endpoint.yml
- ```
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deploy-and-run.sh" ID="create_batch_deployment_set_default" :::
# [Python](#tab/sdk)
Follow the next steps to create a deployment using the previous scoring script:
model=model, environment=environment, code_configuration=CodeConfiguration(
- code="./heart-classifier-mlflow/code/",
- scoring_script="batch_driver_parquet.py",
+ code="code/",
+ scoring_script="batch_driver.py",
), compute=compute_name, instance_count=2,
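For reference, a fuller Python sketch of this deployment, mirroring the settings from the YAML shown earlier, might look like the following; the endpoint, deployment, and compute names are taken from this example and assume the v2 `azure-ai-ml` SDK:

```python
from azure.ai.ml.entities import BatchDeployment, BatchRetrySettings, CodeConfiguration
from azure.ai.ml.constants import BatchDeploymentOutputAction

deployment = BatchDeployment(
    name="classifier-xgboost-parquet",
    description="A heart condition classifier based on XGBoost",
    endpoint_name="heart-classifier-batch",
    model=model,
    environment=environment,
    code_configuration=CodeConfiguration(
        code="code/",
        scoring_script="batch_driver.py",
    ),
    compute="cpu-cluster",
    instance_count=2,
    max_concurrency_per_instance=2,
    mini_batch_size=2,
    # summary_only: the job doesn't aggregate predictions into a single file;
    # the scoring script writes its own parquet outputs instead.
    output_action=BatchDeploymentOutputAction.SUMMARY_ONLY,
    retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
    logging_level="info",
)
ml_client.batch_deployments.begin_create_or_update(deployment).result()
```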
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
To determine the current usage for an endpoint, [view the metrics](how-to-monito
To request an exception from the Azure Machine Learning product team, use the steps in the [Request quota increases](#request-quota-increases).
+### Azure Machine Learning Kubernetes online endpoints
+
+Azure Machine Learning Kubernetes online endpoints have limits described in the following table.
+
+| **Resource** | **Limit** |
+| | |
+| Endpoint name| Same as [managed online endpoint](#azure-machine-learning-managed-online-endpoints) |
+| Deployment name| Same as [managed online endpoint](#azure-machine-learning-managed-online-endpoints)|
+| Number of endpoints per subscription | 50 |
+| Number of deployments per subscription | 200 |
+| Number of deployments per endpoint | 20 |
+| Max request time-out at endpoint level | 300 seconds |
+
+The sum of Kubernetes online endpoints and managed online endpoints under each subscription cannot exceed 50. Similarly, the sum of Kubernetes online deployments and managed online deployments under each subscription cannot exceed 200.
### Azure Machine Learning pipelines [Azure Machine Learning pipelines](concept-ml-pipelines.md) have the following limits.
When you're requesting a quota increase, select the service that you have in min
1. Scroll to **Machine Learning Service: Virtual Machine Quota**.
- :::image type="content" source="./media/how-to-manage-quotas/virtual-machine-quota.png" lightbox="./media/how-to-manage-quotas/virtual-machine-quota.png" alt-text="Screenshot of the VM quota details form.":::
+ :::image type="content" source="./media/how-to-manage-quotas/virtual-machine-quota.png" lightbox="./media/how-to-manage-quotas/virtual-machine-quota.png" alt-text="Screenshot of the VM quota details.":::
-2. Under **Additonal Details** specify the request details with the number of additional vCPUs required to run your Machine Learning Endpoint.
+2. Under **Additional Details** specify the request details with the number of additional vCPUs required to run your Machine Learning Endpoint.
- :::image type="content" source="./media/how-to-manage-quotas/vm-quota-request-additional-info.png" lightbox="./media/how-to-manage-quotas/vm-quota-request-additional-info.png" alt-text="Screenshot of the VM quota additional details form.":::
+ :::image type="content" source="./media/how-to-manage-quotas/vm-quota-request-additional-info.png" lightbox="./media/how-to-manage-quotas/vm-quota-request-additional-info.png" alt-text="Screenshot of the VM quota additional details.":::
> [!NOTE] > [Free trial subscriptions](https://azure.microsoft.com/offers/ms-azr-0044p) are not eligible for limit or quota increases. If you have a free trial subscription, you can upgrade to a [pay-as-you-go](https://azure.microsoft.com/offers/ms-azr-0003p/) subscription. For more information, see [Upgrade Azure free trial to pay-as-you-go](../cost-management-billing/manage/upgrade-azure-subscription.md) and [Azure free account FAQ](https://azure.microsoft.com/free/free-account-faq).
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
This is a list of common deployment errors that are reported as part of the depl
* [ResourceNotFound](#error-resourcenotfound) * [OperationCanceled](#error-operationcanceled)
-If you are creating or updating a Kubernetes online deployment, you can see [Common errors specific to Kubernetes deployments](#).
+If you are creating or updating a Kubernetes online deployment, you can see [Common errors specific to Kubernetes deployments](#common-errors-specific-to-kubernetes-deployments).
### ERROR: ImageBuildFailure
When you are creating a managed online endpoint, role assignment is required for
Try to delete some unused endpoints in this subscription. If all of your endpoints are actively in use, you can try [requesting an endpoint quota increase](how-to-manage-quotas.md#endpoint-quota-increases).
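If you prefer the Python SDK, a quick way to review and remove unused endpoints might look like the following sketch; the endpoint name is a placeholder:

```python
# List online endpoints in the workspace and delete one that is no longer needed.
for endpoint in ml_client.online_endpoints.list():
    print(endpoint.name, endpoint.provisioning_state)

ml_client.online_endpoints.begin_delete(name="<unused-endpoint-name>").result()
```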
+For Kubernetes online endpoints, there's also an endpoint quota boundary at the cluster level. For more details, check the [Kubernetes online endpoint quota](how-to-manage-quotas.md#azure-machine-learning-kubernetes-online-endpoints) section.
+
+#### Kubernetes quota
+
+This issue happens when the requested CPU or memory can't be satisfied because all nodes are unschedulable for this deployment, for example because nodes are cordoned or unavailable.
+
+The error message typically indicates which resource is insufficient in the cluster. For example, `OutOfQuota: Kubernetes unschedulable. Details:0/1 nodes are available: 1 Too many pods...` means that there are too many pods in the cluster and not enough resources to deploy the new model based on your request.
+
+You can try the following mitigations to address this issue:
+* If you're an IT operator who maintains the Kubernetes cluster, you can try to add more nodes or clean up unused pods in the cluster to release resources.
+* If you're a machine learning engineer who deploys models, you can try to reduce the resource request of your deployment:
+ * If you directly define the resource request in the deployment configuration via the resource section, you can try to reduce the resource request, as shown in the sketch after this list.
+ * If you use `instance type` to define resources for your model deployment, you can contact IT ops to adjust the instance type resource configuration. For more details, see [How to manage Kubernetes instance types](how-to-manage-kubernetes-instance-types.md).
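As a sketch of the first option, reducing the resource request directly in the deployment configuration with the v2 Python SDK could look like this; the names and the CPU/memory values are illustrative, and `model` refers to a previously registered model:

```python
from azure.ai.ml.entities import (
    KubernetesOnlineDeployment,
    ResourceRequirementsSettings,
    ResourceSettings,
)

# Request less CPU and memory so the deployment's pods can be scheduled on the available nodes.
deployment = KubernetesOnlineDeployment(
    name="blue",
    endpoint_name="<your-kubernetes-endpoint>",
    model=model,
    resources=ResourceRequirementsSettings(
        requests=ResourceSettings(cpu="0.3", memory="0.5Gi"),
        limits=ResourceSettings(cpu="0.5", memory="1Gi"),
    ),
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```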
++ #### Region-wide VM capacity Due to a lack of Azure Machine Learning capacity in the region, the service has failed to provision the specified VM size. Retry later or try deploying to a different region.
Use the **Endpoints** in the studio:
1. Select the **Deployment logs** tab in the endpoint's details page. 1. Use the dropdown to select the deployment whose log you want to see.
-#### Kubernetes quota
-
-This issue happens when the requested CPU or memory couldn't be satisfied due to all nodes are unschedulable for this deployment, such as nodes are cordoned or nodes are unavailable.
-
-The error message will typically indicate which resource you need more of. For instance, if you see an error message detailing `0/3 nodes are available: 3 Insufficient nvidia.com/gpu`, that means that the service requires GPUs and there are three nodes in the cluster that don't have sufficient GPUs. This can be addressed by adding more nodes if you're using a GPU SKU, switching to a GPU-enabled SKU if you aren't, or changing your environment to not require GPUs.
-
-You can also try adjusting your request in the cluster, you can directly [adjust the resource request of the instance type](how-to-manage-kubernetes-instance-types.md).
--
+-
### ERROR: BadArgument
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint.md
cd azureml-examples/cli/endpoints/batch
### Follow along in Jupyter Notebooks
-You can follow along this sample in the following notebooks. In the cloned repository, open the notebook: [mnist-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/mnist-batch.ipynb).
+You can follow along this sample in the following notebooks. In the cloned repository, open the notebook: [mnist-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-models/mnist-classifier/mnist-batch.ipynb).
## Prerequisites
A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch s
# [Azure CLI](#tab/azure-cli)
- The following YAML file defines a batch endpoint, which you can include in the CLI command for [batch endpoint creation](#create-a-batch-endpoint). In the repository, this file is located at `/cli/endpoints/batch/batch-endpoint.yml`.
+ The following YAML file defines a batch endpoint, which you can include in the CLI command for [batch endpoint creation](#create-a-batch-endpoint).
- __mnist-endpoint.yml__
+ __endpoint.yml__
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist-endpoint.yml":::
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/endpoint.yml":::
The following table describes the key properties of the endpoint. For the full batch endpoint YAML schema, see [CLI (v2) batch endpoint YAML schema](./reference-yaml-endpoint-batch.md).
A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch s
| `name` | The name of the batch endpoint. Needs to be unique at the Azure region level.|
| `description` | The description of the batch endpoint. This property is optional. |
| `auth_mode` | The authentication method for the batch endpoint. Currently only Azure Active Directory token-based authentication (`aad_token`) is supported. |
- | `defaults.deployment_name` | The name of the deployment that will serve as the default deployment for the endpoint. |
# [Python](#tab/python)
A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch s
Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.
- :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="create_batch_endpoint" :::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="create_batch_endpoint" :::
# [Python](#tab/python)
A deployment is a set of resources required for hosting the model that does the
> [!WARNING] > If you're deploying an Automated ML model under a batch endpoint, notice that the scoring script that Automated ML provides only works for online endpoints and is not designed for batch execution. Please see [Author scoring scripts for batch deployments](how-to-batch-scoring-script.md) to learn how to create one depending on what your model does.
- __mnist/code/batch_driver.py__
+ __deployment-torch/code/batch_driver.py__
- :::code language="python" source="~/azureml-examples-main/sdk/python/endpoints/batch/mnist/code/batch_driver.py" :::
+ :::code language="python" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/code/batch_driver.py" :::
1. Create an environment where your batch deployment will run. Such environment needs to include the packages `azureml-core` and `azureml-dataset-runtime[fuse]`, which are required by batch endpoints, plus any dependency your code requires for running. In this case, the dependencies have been captured in a `conda.yml`:
- __mnist/environment/conda.yml__
+ __deployment-torch/environment/conda.yml__
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist/environment/conda.yml":::
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/environment/conda.yml":::
> [!IMPORTANT] > The packages `azureml-core` and `azureml-dataset-runtime[fuse]` are required by batch deployments and should be included in the environment dependencies.
A deployment is a set of resources required for hosting the model that does the
The environment definition will be included in the deployment definition itself as an anonymous environment. You'll see in the following lines in the deployment:
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist-torch-deployment.yml" range="10-12":::
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/deployment.yml" range="10-13":::
# [Python](#tab/python)
A deployment is a set of resources required for hosting the model that does the
```python env = Environment(
- conda_file="./mnist/environment/conda.yml",
+ conda_file="deployment-torch/environment/conda.yml",
image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest", ) ```
A deployment is a set of resources required for hosting the model that does the
On [Azure Machine Learning studio portal](https://ml.azure.com), follow these steps: 1. Navigate to the __Environments__ tab on the side menu.
+
1. Select the tab __Custom environments__ > __Create__.
+
1. Enter the name of the environment, in this case `torch-batch-env`.
+
1. On __Select environment type__ select __Use existing docker image with conda__.
- 1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04`.
- 1. On __Customize__ section copy the content of the file `./mnist/environment/conda.yml` included in the repository into the portal.
+
+ 1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04`.
+
+ 1. On __Customize__ section copy the content of the file `deployment-torch/environment/conda.yml` included in the repository into the portal.
+
1. Select __Next__ and then on __Create__.
+
1. The environment is ready to be used.
A deployment is a set of resources required for hosting the model that does the
__mnist-torch-deployment.yml__
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist-torch-deployment.yml":::
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/deployment.yml":::
For the full batch deployment YAML schema, see [CLI (v2) batch deployment YAML schema](./reference-yaml-deployment-batch.md).
A deployment is a set of resources required for hosting the model that does the
description="A deployment using Torch to solve the MNIST classification dataset.", endpoint_name=batch_endpoint_name, model=model,
- code_path="./mnist/code/",
+ code_path="deployment-torch/code",
scoring_script="batch_driver.py", environment=env, compute=compute_name,
A deployment is a set of resources required for hosting the model that does the
On [Azure Machine Learning studio portal](https://ml.azure.com), follow these steps: 1. Navigate to the __Endpoints__ tab on the side menu.
+
1. Select the tab __Batch endpoints__ > __Create__.
+
1. Give the endpoint a name, in this case `mnist-batch`. You can configure the rest of the fields or leave them blank.
+
1. Select __Next__.
+
1. On the model list, select the model `mnist` and select __Next__.
+
1. On the deployment configuration page, give the deployment a name.
+
1. On __Output action__, ensure __Append row__ is selected.
+
1. On __Output file name__, ensure the batch scoring output file is the one you need. Default is `predictions.csv`.
+
1. On __Mini batch size__, adjust the size of the files that will be included on each mini-batch. This will control the amount of data your scoring script receives per each batch.
+
1. On __Scoring timeout (seconds)__, ensure you're giving enough time for your deployment to score a given batch of files. If you increase the number of files, you usually have to increase the timeout value too. More expensive models (like those based on deep learning), may require high values in this field.
+
1. On __Max concurrency per instance__, configure the number of executors you want to have per each compute instance you get in the deployment. A higher number here guarantees a higher degree of parallelization but it also increases the memory pressure on the compute instance. Tune this value altogether with __Mini batch size__.
+
1. Once done, select __Next__.
+
1. On environment, go to __Select scoring file and dependencies__ and select __Browse__.
- 1. Select the scoring script file on `/mnist/code/batch_driver.py`.
+
+ 1. Select the scoring script file on `deployment-torch/code/batch_driver.py`.
+
 1. On the section __Choose an environment__, select the environment you created in a previous step.
+
1. Select __Next__.
+
 1. On the section __Compute__, select the compute cluster you created in a previous step. > [!WARNING] > Azure Kubernetes clusters are supported in batch deployments, but only when created using the Azure Machine Learning CLI or Python SDK. 1. On __Instance count__, enter the number of compute instances you want for the deployment. In this case, we'll use 2.
+
1. Select __Next__. 1. Create the deployment:
A deployment is a set of resources required for hosting the model that does the
Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.
- ::: az ml batch-deployment create --file mnist-torch-deployment.yml --endpoint-name $ENDPOINT_NAME --set-default :::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="create_batch_deployment_set_default" :::
> [!TIP] > The `--set-default` parameter sets the newly created deployment as the default deployment of the endpoint. It's a convenient way to create a new default deployment of the endpoint, especially for the first deployment creation. As a best practice for production scenarios, you may want to create a new deployment without setting it as default, verify it, and update the default deployment later. For more information, see the [Deploy a new model](#adding-deployments-to-an-endpoint) section.
A deployment is a set of resources required for hosting the model that does the
Use `show` to check endpoint and deployment details. To check a batch deployment, run the following code:
- :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="check_batch_deployment_detail" :::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="check_batch_deployment_detail" :::
# [Python](#tab/python)
A deployment is a set of resources required for hosting the model that does the
# [Studio](#tab/azure-studio) 1. Navigate to the __Endpoints__ tab on the side menu.
+
1. Select the tab __Batch endpoints__.
+
1. Select the batch endpoint you want to get details from.
+
1. In the endpoint page, you'll see all the details of the endpoint along with all the deployments available. :::image type="content" source="./media/how-to-use-batch-endpoints-studio/batch-endpoint-details.png" alt-text="Screenshot of the check batch endpoints and deployment details.":::
Invoking a batch endpoint triggers a batch scoring job. A job `name` will be ret
# [Azure CLI](#tab/azure-cli) # [Python](#tab/python)
job = ml_client.batch_endpoints.invoke(
# [Studio](#tab/azure-studio) 1. Navigate to the __Endpoints__ tab on the side menu.+ 1. Select the tab __Batch endpoints__.+ 1. Select the batch endpoint you just created.+ 1. Select __Create job__. :::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
job = ml_client.batch_endpoints.invoke(
:::image type="content" source="./media/how-to-use-batch-endpoints-studio/job-setting-batch-scoring.png" alt-text="Screenshot of using the deployment to submit a batch job."::: 1. Select __Next__.+ 1. On __Select data source__, select the data input you want to use. For this example, select __Datastore__ and in the section __Path__ enter the full URL `https://azuremlexampledata.blob.core.windows.net/dat) for details. :::image type="content" source="./media/how-to-use-batch-endpoints-studio/select-datastore-job.png" alt-text="Screenshot of selecting datastore as an input option.":::
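For the Python tab above, a minimal sketch of invoking the endpoint might look like the following; the endpoint name and data path are placeholders, and depending on your `azure-ai-ml` version the parameter may be `input` or `inputs`:

```python
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# Trigger a batch scoring job against the endpoint's default deployment.
job = ml_client.batch_endpoints.invoke(
    endpoint_name="mnist-batch",
    input=Input(
        type=AssetTypes.URI_FOLDER,
        path="https://azuremlexampledata.blob.core.windows.net/data/<path-to-data>",
    ),
)
print(job.name)
```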
The batch scoring results are by default stored in the workspace's default blob
Use `output-path` to configure any folder in an Azure Machine Learning registered datastore. The syntax for the `--output-path` is the same as `--input` when you're specifying a folder, that is, `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/`. Use `--set output_file_name=<your-file-name>` to configure a new output file name. # [Python](#tab/python)
Some settings can be overwritten when invoke to make best use of the compute res
# [Azure CLI](#tab/azure-cli) # [Python](#tab/python)
job = ml_client.batch_endpoints.invoke(
# [Studio](#tab/azure-studio) 1. Navigate to the __Endpoints__ tab on the side menu.+ 1. Select the tab __Batch endpoints__.+ 1. Select the batch endpoint you just created.+ 1. Select __Create job__. :::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring."::: 1. On __Deployment__, select the deployment you want to execute.+ 1. Select __Next__.+ 1. Check the option __Override deployment settings__. :::image type="content" source="./media/how-to-use-batch-endpoints-studio/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
Batch scoring jobs usually take some time to process the entire set of inputs.
You can use CLI `job show` to view the job. Run the following code to check job status from the previous endpoint invoke. To learn more about job commands, run `az ml job -h`. # [Python](#tab/python)
ml_client.jobs.get(job.name)
# [Studio](#tab/azure-studio) 1. Navigate to the __Endpoints__ tab on the side menu.+ 1. Select the tab __Batch endpoints__.+ 1. Select the batch endpoint you want to monitor.+ 1. Select the tab __Jobs__. :::image type="content" source="media/how-to-use-batch-endpoints-studio/summary-jobs.png" alt-text="Screenshot of summary of jobs submitted to a batch endpoint."::: 1. You'll see a list of the jobs created for the selected endpoint.+ 1. Select the last job that is running.+ 1. You'll be redirected to the job monitoring page.
Follow the following steps to view the scoring results in Azure Storage Explorer
1. Run the following code to open batch scoring job in Azure Machine Learning studio. The job studio link is also included in the response of `invoke`, as the value of `interactionEndpoints.Studio.endpoint`.
- :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="show_job_in_studio" :::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="show_job_in_studio" :::
1. In the graph of the job, select the `batchscoring` step.+ 1. Select the __Outputs + logs__ tab and then select **Show data outputs**.+ 1. From __Data outputs__, select the icon to open __Storage Explorer__. :::image type="content" source="media/how-to-use-batch-endpoint/view-data-outputs.png" alt-text="Studio screenshot showing view data outputs location." lightbox="media/how-to-use-batch-endpoint/view-data-outputs.png":::
In this example, you'll learn how to add a second deployment __that solves the s
```python env = Environment(
- conda_file="./mnist-keras/environment/conda.yml",
+ conda_file="deployment-kera/environment/conda.yml",
image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest", ) ```
In this example, you'll learn how to add a second deployment __that solves the s
# [Studio](#tab/azure-studio) 1. Navigate to the __Environments__ tab on the side menu.
+
1. Select the tab __Custom environments__ > __Create__.
+
1. Enter the name of the environment, in this case `keras-batch-env`.
+
1. On __Select environment type__ select __Use existing docker image with conda__.
+
1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04`.
+
1. On __Customize__ section copy the content of the file `./mnist-keras/environment/conda.yml` included in the repository into the portal.
+
1. Select __Next__ and then on __Create__.
+
1. The environment is ready to be used. The conda file used looks as follows:
- __mnist-keras/environment/conda.yml__
+ __deployment-keras/environment/conda.yml__
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist-keras/environment/conda.yml":::
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-keras/environment/conda.yml":::
1. Create a scoring script for the model:
- __mnist-keras/code/batch_driver.py__
+ __deployment-keras/code/batch_driver.py__
- :::code language="python" source="~/azureml-examples-main/sdk/python/endpoints/batch/mnist-keras/code/batch_driver.py" :::
+ :::code language="python" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-keras/code/batch_driver.py" :::
3. Create a deployment definition # [Azure CLI](#tab/azure-cli)
- __mnist-keras-deployment.yml__
+ __deployment-keras/deployment.yml__
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist-keras-deployment.yml":::
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-keras/deployment.yml":::
# [Python](#tab/python)
In this example, you'll learn how to add a second deployment __that solves the s
description="this is a sample non-mlflow deployment", endpoint_name=batch_endpoint_name, model=model,
- code_path="./mnist-keras/code/",
+ code_path="deployment-keras/code/",
scoring_script="batch_driver.py", environment=env, compute=compute_name,
In this example, you'll learn how to add a second deployment __that solves the s
# [Studio](#tab/azure-studio) 1. Navigate to the __Endpoints__ tab on the side menu.
+
1. Select the tab __Batch endpoints__.
+
1. Select the existing batch endpoint where you want to add the deployment.
+
1. Select __Add deployment__. :::image type="content" source="./media/how-to-use-batch-endpoints-studio/add-deployment-option.png" alt-text="Screenshot of add new deployment option."::: 1. On the model list, select the model `mnist` and select __Next__.
+
1. On the deployment configuration page, give the deployment a name.
+
1. On __Output action__, ensure __Append row__ is selected.
+
1. On __Output file name__, ensure the batch scoring output file is the one you need. Default is `predictions.csv`.
+
1. On __Mini batch size__, adjust the size of the files that will be included on each mini-batch. This will control the amount of data your scoring script receives per each batch.
+
1. On __Scoring timeout (seconds)__, ensure you're giving enough time for your deployment to score a given batch of files. If you increase the number of files, you usually have to increase the timeout value too. More expensive models (like those based on deep learning), may require high values in this field.
+
1. On __Max concurrency per instance__, configure the number of executors you want to have per each compute instance you get in the deployment. A higher number here guarantees a higher degree of parallelization but it also increases the memory pressure on the compute instance. Tune this value altogether with __Mini batch size__. 1. Once done, select __Next__.
+
1. On environment, go to __Select scoring file and dependencies__ and select __Browse__.
- 1. Select the scoring script file on `/mnist-keras/code/batch_driver.py`.
+
+ 1. Select the scoring script file on `deployment-keras/code/batch_driver.py`.
+
 1. On the section __Choose an environment__, select the environment you created in a previous step.
+
1. Select __Next__.
+
1. On the section __Compute__, select the compute cluster you created in a previous step.
+
1. On __Instance count__, enter the number of compute instances you want for the deployment. In this case, we'll use 2.
+
1. Select __Next__. 1. Create the deployment:
In this example, you'll learn how to add a second deployment __that solves the s
Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.
- :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="create_new_deployment_not_default" :::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="create_new_deployment_not_default" :::
> [!TIP] > The `--set-default` parameter is missing in this case. As a best practice for production scenarios, you may want to create a new deployment without setting it as default, verify it, and update the default deployment later.
To test the new non-default deployment, you'll need to know the name of the depl
# [Azure CLI](#tab/azure-cli) Notice `--deployment-name` is used to specify the deployment we want to execute. This parameter allows you to `invoke` a non-default deployment, and it will not update the default deployment of the batch endpoint.
Notice `deployment_name` is used to specify the deployment we want to execute. T
# [Studio](#tab/azure-studio) 1. Navigate to the __Endpoints__ tab on the side menu.+ 1. Select the tab __Batch endpoints__.+ 1. Select the batch endpoint you just created.+ 1. Select __Create job__.+ 1. On __Deployment__, select the deployment you want to execute. In this case, `mnist-keras`.+ 1. Complete the job creation wizard to get the job started.
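For the Python tab, a sketch of invoking the non-default deployment by name could look like the following; the endpoint name and data path are placeholders:

```python
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# Invoking by deployment name doesn't change the endpoint's default deployment.
job = ml_client.batch_endpoints.invoke(
    endpoint_name="mnist-batch",
    deployment_name="mnist-keras",
    input=Input(
        type=AssetTypes.URI_FOLDER,
        path="https://azuremlexampledata.blob.core.windows.net/data/<path-to-data>",
    ),
)
```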
Although you can invoke a specific deployment inside of an endpoint, you'll usua
# [Azure CLI](#tab/azure-cli) # [Python](#tab/python)
ml_client.batch_endpoints.begin_create_or_update(endpoint)
# [Studio](#tab/azure-studio) 1. Navigate to the __Endpoints__ tab on the side menu.+ 1. Select the tab __Batch endpoints__.+ 1. Select the batch endpoint you want to configure.+ 1. Select __Update default deployment__. :::image type="content" source="./media/how-to-use-batch-endpoints-studio/update-default-deployment.png" alt-text="Screenshot of updating default deployment."::: 1. On __Select default deployment__, select the name of the deployment you want to be the default one.+ 1. Select __Update__.+ 1. The selected deployment is now the default one.
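In the Python SDK, switching the default deployment can be sketched as follows, assuming the `defaults.deployment_name` property on the endpoint entity; the names are placeholders:

```python
# Point the endpoint's default at the verified deployment and update it.
endpoint = ml_client.batch_endpoints.get("mnist-batch")
endpoint.defaults.deployment_name = "mnist-keras"
ml_client.batch_endpoints.begin_create_or_update(endpoint).result()
```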
ml_client.batch_endpoints.begin_create_or_update(endpoint)
If you aren't going to use the old batch deployment, you should delete it by running the following code. `--yes` is used to confirm the deletion. Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs won't be deleted. # [Python](#tab/python)
ml_client.compute.begin_delete(name=compute_name)
# [Studio](#tab/azure-studio) 1. Navigate to the __Endpoints__ tab on the side menu.+ 1. Select the tab __Batch endpoints__.+ 1. Select the batch endpoint you want to delete.+ 1. Select __Delete__.+ 1. The endpoint all along with its deployments will be deleted.+ 1. Notice that this won't affect the compute cluster where the deployment(s) run.
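For the Python tab, a minimal cleanup sketch might look like the following; the deployment and endpoint names are placeholders, and completed batch scoring jobs aren't removed by these calls:

```python
# Delete a deployment that is no longer needed, then the endpoint itself.
ml_client.batch_deployments.begin_delete(
    name="<deployment-name>", endpoint_name="mnist-batch"
).result()
ml_client.batch_endpoints.begin_delete(name="mnist-batch").result()
```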
machine-learning Reference Yaml Deployment Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-batch.md
Examples are available in the [examples GitHub repository](https://github.com/Az
## YAML: basic (MLflow) ## YAML: custom model and scoring code ## Next steps
machine-learning Reference Yaml Endpoint Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-endpoint-batch.md
Examples are available in the [examples GitHub repository](https://github.com/Az
## YAML: basic ## Next steps
machine-learning How To Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-enable-data-collection.md
Once collection is enabled, the data you collect helps you:
* Retrain your model with the collected data.
+## Limitations
+
+* The model data collection feature only works with the Ubuntu 18.04 image.
+
+>[!IMPORTANT]
+>
+> As of 03/10/2023, the Ubuntu 18.04 image is deprecated. **Support for Ubuntu 18.04 images will be dropped starting January 2023, when the image reaches EOL on April 30, 2023.**
+>
+> The MDC feature is incompatible with any image other than Ubuntu 18.04, which is no longer available after the Ubuntu 18.04 image is deprecated.
+>
+> For more information, you can refer to:
+> * [openmpi3.1.2-ubuntu18.04 release-notes](https://github.com/Azure/AzureML-Containers/blob/master/base/cpu/openmpi3.1.2-ubuntu18.04/release-notes.md)
+> * [data science virtual machine release notes](../data-science-virtual-machine/release-notes.md#september-20-2022)
+
+>[!NOTE]
+>
+> The data collection feature is currently in preview. Preview features aren't recommended for production workloads.
+ ## What is collected and where it goes The following data can be collected:
managed-instance-apache-cassandra Configure Hybrid Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/configure-hybrid-cluster.md
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
1. Certs signed by a CA. This can be a self-signed CA or even a public one. In this case we need the root CA certificate (refer to instructions on [preparing SSL certificates for production](https://docs.datastax.com/en/cassandra-oss/3.x/cassandra/configuration/secureSSLCertWithCA.html)), and all intermediaries (if applicable).
- Optionally, if you have also implemented client-to-node certificates (see [here](https://docs.datastax.com/en/cassandra-oss/3.x/cassandra/configuration/secureSSLClientToNode.html)), you also need to provide them in the same format when creating the hybrid cluster. See sample below.
+ Optionally, if you want to implement client-to-node certificate authentication as well, you need to provide the certificates in the same format when creating the hybrid cluster. See the Azure CLI sample below; the certificates are provided in the `--client-certificates` parameter. This will upload and apply your client certificates to the truststore for your Cassandra Managed Instance cluster (that is, you don't need to edit `cassandra.yaml` settings).
> [!NOTE] > The value of the `delegatedManagementSubnetId` variable you will supply below is exactly the same as the value of `--scope` that you supplied in the command above:
managed-instance-apache-cassandra Create Cluster Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-cli.md
As with CQLSH, connecting from an application using one of the supported [Apache
Disabling certificate verification is recommended because certificate verification will not work unless you map IP addresses of your cluster nodes to the appropriate domain. If you have an internal policy which mandates that you do SSL certificate verification for any application, you can facilitate this by adding entries like `10.0.1.5 host1.managedcassandra.cosmos.azure.com` in your hosts file for each node. If taking this approach, you would also need to add new entries whenever scaling up nodes.
+### Configuring client certificates
+
+Configuring client certificates is optional. In general, there are two ways of creating certificates:
+
+- Self-signed certs. This means a private and public (no CA) certificate for each node; in this case, we need all public certificates.
+- Certs signed by a CA. This can be a self-signed CA or even a public one. In this case we need the root CA certificate (refer to [instructions on preparing SSL certificates](https://docs.datastax.com/en/cassandra-oss/3.x/cassandra/configuration/secureSSLCertWithCA.html) for production), and all intermediaries (if applicable).
+
+If you want to implement client-to-node certificate authentication, you need to provide the certificates via Azure CLI. The below command will upload and apply your client certificates to the truststore for your Cassandra Managed Instance cluster (i.e. you do not need to edit `cassandra.yaml` settings).
+
+ ```azurecli-interactive
+ resourceGroupName='<Resource_Group_Name>'
+ clusterName='<Cluster Name>'
+
+ az managed-cassandra cluster update \
+ --resource-group $resourceGroupName \
+ --cluster-name $clusterName \
+ --client-certificates /usr/csuser/clouddrive/rootCert.pem /usr/csuser/clouddrive/intermediateCert.pem
+ ```
+ ## Troubleshooting
managed-instance-apache-cassandra Create Cluster Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-portal.md
As with CQLSH, connecting from an application using one of the supported [Apache
Disabling certificate verification is recommended because certificate verification will not work unless you map IP addresses of your cluster nodes to the appropriate domain. If you have an internal policy which mandates that you do SSL certificate verification for any application, you can facilitate this by adding entries like `10.0.1.5 host1.managedcassandra.cosmos.azure.com` in your hosts file for each node. If taking this approach, you would also need to add new entries whenever scaling up nodes.
+### Configuring client certificates
+Configuring client certificates is optional. In general, there are two ways of creating certificates:
+
+- Self-signed certs. This means a private and public (no CA) certificate for each node; in this case, we need all public certificates.
+- Certs signed by a CA. This can be a self-signed CA or even a public one. In this case we need the root CA certificate (refer to [instructions on preparing SSL certificates](https://docs.datastax.com/en/cassandra-oss/3.x/cassandra/configuration/secureSSLCertWithCA.html) for production), and all intermediaries (if applicable).
+
+If you want to implement client-to-node certificate authentication, you need to provide the certificates via Azure CLI. The below command will upload and apply your client certificates to the truststore for your Cassandra Managed Instance cluster (i.e. you do not need to edit `cassandra.yaml` settings).
+
+ ```azurecli-interactive
+ resourceGroupName='<Resource_Group_Name>'
+ clusterName='<Cluster Name>'
+
+ az managed-cassandra cluster update \
+ --resource-group $resourceGroupName \
+ --cluster-name $clusterName \
+ --client-certificates /usr/csuser/clouddrive/rootCert.pem /usr/csuser/clouddrive/intermediateCert.pem
+ ```
## Clean up resources
network-watcher Network Watcher Packet Capture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-overview.md
To control the size of captured data and only capture required information, use
## Considerations
-There's a limit of 10,000 parallel packet capture sessions per region per subscription. This limit applies only to the sessions and doesn't apply to the saved packet capture files either locally on the VM or in a storage account. See the [Network Watcher service limits page](../azure-resource-manager/management/azure-subscription-service-limits.md#network-watcher-limits) for a full list of limits.
+There's a limit of 10,000 parallel packet capture sessions per region per subscription. This limit applies only to the sessions and doesn't apply to the saved packet capture files either locally on the VM or in a storage account. See the [Network Watcher service limits page](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-network-watcher-limits) for a full list of limits.
## Next steps
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
In this article, we will provide an overview and introduction to core concepts of flexible server deployment model. -- ## Overview Azure Database for PostgreSQL - Flexible Server is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. In general, the service provides more flexibility and server configuration customizations based on the user requirements. The flexible server architecture allows users to collocate database engine with the client-tier for lower latency, choose high availability within a single availability zone and across multiple availability zones. Flexible servers also provide better cost optimization controls with ability to stop/start your server and burstable compute tier that is ideal for workloads that do not need full compute capacity continuously. The service currently supports community version of [PostgreSQL 11, 12, 13, and 14](./concepts-supported-versions.md). The service is currently available in wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
postgresql Whats Happening To Postgresql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/whats-happening-to-postgresql-single-server.md
description: The Azure Database for PostgreSQL single server service is being de
Previously updated : 03/29/2023 Last updated : 03/30/2023
[!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
-**Azure Database for PostgreSQL - Single Server is on the retirement path** is on the retirement path and is scheduled for retirement by March 28, 2025.
+**Azure Database for PostgreSQL - Single Server is on the retirement path** and is scheduled for retirement by March 28, 2025.
-Azure Database for PostgreSQL ΓÇô Single Server generally became available in 2018. However, given customer feedback and new advancements in the computation, availability, scalability, and performance capabilities in the Azure database landscape, the Single Server offering needs to be retired and upgraded with a new architecture ΓÇô Azure Database for PostgreSQL Flexible Server to bring you the best of AzureΓÇÖs open-source database platform.
+Azure Database for PostgreSQL - Single Server generally became available in 2018. Given customer feedback and new advancements in the computation, availability, scalability, and performance capabilities in the Azure database landscape, the Single Server offering needs to be retired and upgraded with a new architecture. Azure Database for PostgreSQL - Flexible Server is the next generation of the service and brings you the best of Azure's open-source database platform.
-As part of this retirement, we no longer support creating new Single Server instances from the Azure portal beginning November 30, 2023. If you need to create Single Server instances to meet business continuity needs, you can continue to use Azure CLI,
+As part of this retirement, we no longer support creating new single server instances from the Azure portal beginning November 30, 2023. If you need to create single server instances to meet business continuity needs, you can continue to use Azure CLI.
-If you currently have an Azure Database for PostgreSQL - Single Server service hosting production servers, we're glad to inform you that you can migrate your Azure Database for PostgreSQL - Single Server servers to the Azure Database for PostgreSQL - Flexible Server service.
+If you currently have an Azure Database for PostgreSQL - Single Server service hosting production servers, we're glad to inform you that you can migrate your Azure Database for PostgreSQL - Single Server to the Azure Database for PostgreSQL - Flexible Server.
-Azure Database for PostgreSQL - Flexible Server is a fully managed production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. For more information about flexible servers, visit **[Azure Database for PostgreSQL - Flexible Server](../flexible-server/overview.md)**.
+Azure Database for PostgreSQL - Flexible Server is a fully managed production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. For more information about Azure Database for PostgreSQL - Flexible Server, visit **[Azure Database for PostgreSQL - Flexible Server](../flexible-server/overview.md)**.
-## Migrate from a single server to a flexible server
+## Migrate from Azure Database for PostgreSQL - Single Server to Azure Database for PostgreSQL - Flexible Server
-Learn how to migrate from Azure Database for PostgreSQL - Single Server to Azure Database for PostgreSQL - Flexible Server using the [Single to Flexible Server Migration Tool](../migrate/concepts-single-to-flexible.md).
+Learn how to migrate from Azure Database for PostgreSQL - Single Server to Azure Database for PostgreSQL - Flexible Server using the [Single Server to Flexible Server migration tool](../migrate/concepts-single-to-flexible.md).
## Frequently Asked Questions (FAQs)
-**Q. Why is Azure Database for PostgreSQL-single server being retired?**
+**Q. Why is Azure Database for PostgreSQL - Single Server being retired?**
-**A.** Azure Database for PostgreSQL - single server generally became available in 2018. However, given customer feedback and new advancements in the computation, availability, scalability, and performance capabilities in the Azure database landscape, the single server offering needs to be retired and upgraded with a new architecture ΓÇô Azure Database for PostgreSQL flexible server to bring you the best of Azure's open-source database platform.
+**A.** Azure Database for PostgreSQL - Single Server generally became available in 2018. Given customer feedback and new advancements in the computation, availability, scalability, and performance capabilities in the Azure database landscape, the Single Server offering needs to be retired and upgraded with a new architecture. Azure Database for PostgreSQL - Flexible Server is the next generation of the service and brings you the best of Azure's open-source database platform.
**Q. Why am I being asked to migrate to Azure Database for PostgreSQL - Flexible Server?**
-**A.** [Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/overview) is the best platform for running all your open-source PostgreSQL workloads on Azure. Azure Database for PostgreSQL- flexible server is economical, provides better performance across all service tiers, and more ways to control your costs for cheaper and faster disaster recovery. Other improvements to the flexible server include:
+**A.** [Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/overview) is the best platform for running all your open-source PostgreSQL workloads on Azure. Azure Database for PostgreSQL - Flexible Server is economical, provides better performance across all service tiers, and more ways to control your costs for cheaper and faster disaster recovery. Other improvements to the flexible server include:
- Support for Postgres version 11 and newer, plus built-in security enhancements - Better price performance with support for burstable tier compute options. - Improved uptime by configuring hot standby on the same or a different availability zone and user-controlled maintenance windows. - A simplified developer experience for high-performance data workloads.
-**Q. How soon must I migrate my single server to a flexible server?**
+**Q. How soon must I migrate my Single Server to a Flexible Server?**
-**A.** Azure Database for PostgreSQL - Single Server is scheduled for retirement by March 28, 2025, so we strongly recommend migrating your single server to a flexible server at the earliest opportunity to ensure ample time to run through the migration lifecycle and use the benefits offered by the flexible server.
+**A.** Azure Database for PostgreSQL - Single Server is scheduled for retirement by March 28, 2025, so we strongly recommend migrating your Single Server to a Flexible Server at the earliest opportunity to ensure ample time to run through the migration lifecycle and use the benefits offered by the Flexible Server.
**Q. What happens to my existing Azure Database for PostgreSQL - Single Server instances?**
-**A.** Your existing Azure Database for PostgreSQL - Single Server workloads continue to be supported until March'2025.
+**A.** Your existing Azure Database for PostgreSQL - Single Server workloads will continue to be supported until March 2025.
-**Q. Can I still create a new version 11 Azure Database for PostgreSQL single servers after the community EOL date in November 2023?**
+**Q. Can I still create a new version 11 Azure Database for PostgreSQL - Single Server after the community EOL date in November 2023?**
-**A.** Beginning November 9, 2023, you'll no longer be able to create new single server instances for PostgreSQL version 11 through the Azure portal. However, you can still [make them via CLI until November 2024](https://azure.microsoft.com/updates/singlepg11-retirement/). We continue to support single servers through our [versioning support policy.](/azure/postgresql/single-server/concepts-version-policy) It would be best to start migrating to Azure Database for PostgreSQL - Flexible Server immediately.
+**A.** Beginning November 9, 2023, you'll no longer be able to create new single server instances for PostgreSQL version 11 through the Azure portal. However, you can still [make them via CLI until November 2024](https://azure.microsoft.com/updates/singlepg11-retirement/). We will continue to support single servers through our [versioning support policy](/azure/postgresql/single-server/concepts-version-policy). It would be best to start migrating to Azure Database for PostgreSQL - Flexible Server immediately.
-**Q. Can I continue running my Azure Database for PostgreSQL - Single Server instances beyond the sunset date of March 28, 2025?**
+**Q. Can I continue running my Azure Database for PostgreSQL - Single Server beyond the sunset date of March 28, 2025?**
-**A.** We plan to support a single server at the sunset date of March 28, 2025, and we strongly advise that you start planning your migration as soon as possible. We plan to end support for single server deployments at the sunset data of March 28, 2025.
+**A.** We plan to support Single Server until the sunset date of March 28, 2025, after which support for single server deployments will end. We strongly advise that you start planning your migration as soon as possible.
-**Q. After the single server retirement announcement, what if I still need to create a new single server to meet my business needs?**
+**Q. After the Single Server retirement announcement, what if I still need to create a new single server to meet my business needs?**
-**A.** We aren't stopping the ability to create new single servers immediately, so you can continue to create new single servers through CLI to meet your business needs for all Postgres versions supported on Azure Database for PostgreSQL ΓÇô single server. We strongly encourage you to explore a flexible server for the scenario and see if that can meet the need. Don't hesitate to contact us if necessary so we can better guide you in these scenarios and suggest the best path forward.
+**A.** We aren't stopping the ability to create new single servers immediately, so you can continue to create new single servers through the CLI to meet your business needs for all PostgreSQL versions supported on Azure Database for PostgreSQL - Single Server. We strongly encourage you to explore Flexible Server and see if that will meet your needs. Don't hesitate to contact us if necessary so we can guide you and suggest the best path forward for you.
**Q. Are there any additional costs associated with performing the migration?**
-**A.** You pay for the target flexible server and the source single server during the migration. The configuration and computing of the target flexible server determine the extra costs incurred (see [Pricing](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/) for more details). Once you've decommissioned the source single server after a successful migration, you only pay for your flexible server. There's no extra cost to use the single server to flexible server migration tool. If you have questions or concerns about the cost of migrating your single server to a flexible server, contact your Microsoft account representative.
+**A.** You pay for the target flexible server and the source single server during the migration. The configuration and compute of the target flexible server will determine the extra costs incurred (see [Pricing](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/) for more details). Once you've decommissioned the source single server after a successful migration, you only pay for your flexible server. Using the Single Server to Flexible Server migration tool doesn't cost extra. If you have questions or concerns about the cost of migrating your single server to a flexible server, contact your Microsoft account representative.
**Q. Will my billing be affected by running Azure Database for PostgreSQL - Flexible Server instead of Azure Database for PostgreSQL - Single Server?**
-**A.** The billing should be comparable if you choose a similar configuration to your Azure Database for PostgreSQL - Single Server. However, if you select the same zone or zone redundant with high availability for the target flexible server, your bill is higher than on a single server. Same zone or zone redundant high availability requires an additional hot standby server to be spun up and store redundant backup data, hence the added cost for the second server. This architecture enables reduced downtime during unplanned outages and planned maintenance. Generally speaking, flexible server provides better price performance. However, this is dependent on your workload.
+**A.** The billing should be comparable if you choose a similar configuration to your Azure Database for PostgreSQL - Single Server. However, if you select the same zone or zone redundant with high availability for the target flexible server, your bill will be higher than it was on your single server. Same zone or zone redundant high availability requires an additional hot standby server to be spun up and store redundant backup data, hence the added cost for the second server. This architecture enables reduced downtime during unplanned outages and planned maintenance. Generally speaking, Flexible Server provides better price performance; however, this depends on your workload.
-**Q. Will I incur downtime when I migrate my Azure Database from PostgreSQL - single server to a flexible server?**
+**Q. Will I incur downtime when I migrate my Azure Database from PostgreSQL - Single Server to a Flexible Server?**
-**A.** Currently, The Single to Flexible Server Migration Tool only supports offline migrations, and support for online migration is coming soon. Offline migration requires downtime to your applications during the migration process. [Learn more about The Single to Flexible Server Migration Tool](../migrate/concepts-single-to-flexible.md).
+**A.** Currently, the Single Server to Flexible Server migration tool only supports offline migrations. Offline migration requires downtime to your applications during the migration process. See [Single Server to Flexible Server migration tool](../migrate/concepts-single-to-flexible.md) for more information.
Downtime depends on several factors, including the number of databases, size of your databases, number of tables inside each database, number of indexes, and the distribution of data across tables. It also depends on the SKU of the source and target server and the IOPS available on the source and target server.
Offline migrations are less complex, with few chances of failure, and are the re
You can contact your account team if your downtime requirements aren't met by the offline migration provided by the Single Server to Flexible Server migration tool.
-**Q. Will there be future updates to single server to support the latest PostgreSQL versions?**
+> [!NOTE]
+> Support for online migration is coming soon.
+
+**Q. Will there be future updates to Single Server to support the latest PostgreSQL versions?**
-**A.** We recommend you migrate to flexible server if you must run on the latest PostgreSQL engine versions. We continue to deploy minor versions released by the community for Postgres version 11 until it's retired by the community in Nov'2023.
+**A.** We recommend you migrate to Flexible Server if you must run on the latest PostgreSQL engine versions. We continue to deploy minor versions released by the community for Postgres version 11 until it's retired by the community in November 2023.
> [!NOTE]
-> We're extending support for Postgres version 11 past the community retirement date and will support PostgreSQL version 11 on both [single server](https://azure.microsoft.com/updates/singlepg11-retirement/) and [flexible server](https://azure.microsoft.com/updates/flexpg11-retirement/) to ease this transition. Consider migrating to a flexible server to use the benefits of the latest Postgres engine versions.
+> We're extending support for Postgres version 11 past the community retirement date and will support PostgreSQL version 11 on both [Single Server](https://azure.microsoft.com/updates/singlepg11-retirement/) and [Flexible Server](https://azure.microsoft.com/updates/flexpg11-retirement/) to ease this transition. Consider migrating to Flexible Server to use the benefits of the latest Postgres engine versions.
-**Q. How does the flexible server's 99.99% availability SLA differ from a single server?**
+**Q. How does the Flexible Server 99.99% availability SLA differ from Single Server?**
-**A.** Flexible server's zone-redundant deployment provides 99.99% availability with zonal-level resiliency, and a single server delivers 99.99% availability but without zonal resiliency. Flexible server's High Availability (HA) architecture deploys a hot standby server with redundant compute and storage (with each site's data stored in 3x copies). A single server's HA architecture doesn't have a passive hot standby to help recover from zonal failures. Flexible server's HA architecture reduces downtime during unplanned outages and planned maintenance.
+**A.** Flexible Server zone-redundant deployment provides 99.99% availability with zonal-level resiliency, and Single Server delivers 99.99% availability but without zonal resiliency. Flexible Server High Availability (HA) architecture deploys a hot standby server with redundant compute and storage (with each site's data stored in 3x copies). A Single Server HA architecture doesn't have a passive hot standby to help recover from zonal failures. Flexible Server HA architecture reduces downtime during unplanned outages and planned maintenance.
-**Q. My single server is deployed in a region that doesn't support flexible servers. How should I proceed with migration?**
+**Q. My Single Server is deployed in a region that doesn't support Flexible Server. How should I proceed with migration?**
-**A.** We're close to regional parity with a single server. However, these are the regions with no flexible server presence.
+**A.** We're close to regional parity with Single Server. These are the regions with no Flexible Server presence:
- China East (CE and CE2)
- China North (CN and CN2)
You can contact your account teams if downtime requirements aren't met by the Of
We recommend migrating to CN3/CE3, Central India, and Sweden South regions.
-**Q. I have a private link configured for my single server, and this feature is not currently supported in flexible servers. How do I migrate?**
+**Q. I have a private link configured for my single server, and this feature is not currently supported in Flexible Server. How do I migrate?**
-**A.** Flexible server support for private-link is our highest priority and on the roadmap. This feature is planned to launch in Q4 2023. Another option is to consider migrating to VNET injected flexible server.
+**A.** Flexible Server support for Private Link is our highest priority and on the roadmap. This feature is planned to launch in Q4 2023. Another option is to consider migrating to a VNet-injected Flexible Server.
-**Q. Is there an option to roll back a single server to a flexible server migration?**
+**Q. Is there an option to roll back a Single Server to Flexible Server migration?**
**A.** You can perform any number of test migrations, test the success of your migration, and perform the final migration once you're ready. Test migrations don't affect the single server source, which remains operational until you perform the migration. If there are any errors during the test migration, you can postpone the final migration and keep your source server running. You can then reattempt the final migration after you resolve the errors. After you've performed a final migration to a flexible server and opened it up for the production workload, you'll lose the ability to go back to single server without incurring data loss.

**Q. How should I migrate my DB (> 1 TB)?**
-**A.** [The Single to Flexible Server Migration Tool](../migrate/concepts-single-to-flexible.md) can migrate databases of all sizes from a single server to a flexible server. The new version of the tool has no restrictions regarding the size of the databases.
+**A.** [The Single Server to Flexible Server migration tool](../migrate/concepts-single-to-flexible.md) can migrate databases of all sizes from a Single Server to a Flexible Server. The new version of the tool has no restrictions regarding the size of the databases.
**Q. Is cross-region migration supported?**
-**A.** Currently, The Single to Flexible Server Migration Tool doesn't support cross-region migrations. It is supported at a later point in time. You can use the pg_dump/pg_restore to perform migrations across regions.
+**A.** Currently, the Single Server to Flexible Server migration tool doesn't support cross-region migrations. Support will be added at a later point in time. You can use pg_dump and pg_restore to perform migrations across regions.
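As a rough illustration of the pg_dump/pg_restore path, here is a minimal sketch only, using placeholder server, user, and database names (Single Server logins use the `user@servername` format, and the empty target database is assumed to already exist):

```bash
# Dump the database from the source Single Server in the original region.
pg_dump -h mysourceserver.postgres.database.azure.com -U myadmin@mysourceserver \
  -d mydb -Fc -f mydb.dump

# Restore it into the target Flexible Server in the new region.
pg_restore -h mytargetserver.postgres.database.azure.com -U myadmin \
  -d mydb --no-owner mydb.dump
```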
Cross-region data migrations should be avoided because the migration takes a long time to complete. A simpler approach is to create a read replica in the target region, fail over your application, and follow the steps outlined earlier.

**Q. Is cross-subscription migration supported?**
-**A.** The Single to Flexible Server Migration Tool supports cross-subscription migrations.
+
+**A.** The Single Server to Flexible Server migration tool supports cross-subscription migrations.
**Q. Is cross-resource group migration supported?**
-**A.** The Single to Flexible Server Migration Tool supports cross-resource group migrations.
+
+**A.** The Single Server to Flexible Server migration tool supports cross-resource group migrations.
**Q. Is there cross-version support?**
-**A.** The single to flexible server Migration Service supports migrating from a lower PostgreSQL version (PG 9.5 and above) to any higher version. As always, application compatibility with higher PostgreSQL versions should be checked beforehand.
-### Single to Flexible Server Migration Tool
+**A.** The Single Server to Flexible Server migration service supports migrating from a lower PostgreSQL version (PG 9.5 and above) to any higher version. As always, application compatibility with higher PostgreSQL versions should be checked beforehand.
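If you're unsure which engine version your source server is running, you can check it before planning the migration. A minimal sketch with placeholder connection details:

```bash
# Check the engine version of the source Single Server.
psql "host=mysourceserver.postgres.database.azure.com dbname=postgres user=myadmin@mysourceserver sslmode=require" \
  -c "SHOW server_version;"
```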
+
+### Single Server to Flexible Server migration tool
-The [Single to Flexible Server Migration Tool](/azure/postgresql/migrate/concepts-single-to-flexible) is a powerful tool that allows you to migrate your SQL Server database from a single server to a flexible server with ease. With this tool, you can easily move your database from an on-premises server or a virtual machine to a flexible server in the cloud, allowing you to take advantage of the scalability and flexibility of cloud computing.
+The [Single Server to Flexible Server migration tool](/azure/postgresql/migrate/concepts-single-to-flexible) is a powerful tool that allows you to migrate your PostgreSQL databases from Azure Database for PostgreSQL - Single Server to Azure Database for PostgreSQL - Flexible Server with ease, allowing you to take advantage of the scalability and flexibility of Flexible Server.
**Q. Which data, schema, and metadata components are migrated as part of the migration?**
-**A.** The Single to Flexible Server Migration Tool migrates schema, data, and metadata from the source to the destination. All the following data, schema, and metadata components are migrated as part of the database migration:
+**A.** The Single Server to Flexible Server migration tool migrates schema, data, and metadata from the source to the destination. All the following data, schema, and metadata components are migrated as part of the database migration:
Data Migration
Metadata Migration:
**Q. What's the difference between offline and online migration?**
-**A.** The Single to Flexible Server Migration Tool supports offline migration now, with online migrations coming soon. With an offline migration, application downtime starts when the migration begins. With an online migration, downtime is limited to the time required to cut over at the end of migration but uses a logical replication mechanism. Your Data/Schema must pass these [open-source PG engine restrictions](https://www.postgresql.org/docs/13/logical-replication-restrictions.html) for online migration. We suggest you test offline migration to determine whether the downtime is acceptable.
+**A.** The Single Server to Flexible Server migration tool supports offline migration now, with online migrations coming soon. With an offline migration, application downtime starts when the migration begins. With an online migration, downtime is limited to the time required to cut over at the end of migration, but it uses a logical replication mechanism. Your data/schema must pass these [open-source PG engine restrictions](https://www.postgresql.org/docs/13/logical-replication-restrictions.html) for online migration. We suggest you test offline migration to determine whether the downtime is acceptable.
Online and Offline migrations are compared in the following table:
| | Online migration | Offline migration |
|--|--|--|
| Downtime required | Small and fixed, irrespective of the data size | Proportional to the data size and other factors. It could be as small as a few minutes for smaller databases to a few hours for larger databases |
| Migration time | Depends on the database size and the write activity until cutover | Depends on the database size |
-**Q. Are there any recommendations for optimizing the performance of The Single to Flexible Server Migration Tool?**
+**Q. Are there any recommendations for optimizing the performance of the Single Server to Flexible Server migration tool?**
**A.** Yes. To perform faster migrations, pick a higher SKU for your flexible server. Pick a minimum of 4 vCores or higher to complete the migration quickly. You can always change the SKU to match the application needs post-migration.
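For example, once the migration completes you can scale the target up or down from the Azure CLI. This is a sketch only, using placeholder resource group, server, and SKU names:

```azurecli
az postgres flexible-server update \
  --resource-group myResourceGroup \
  --name mytargetserver \
  --tier GeneralPurpose \
  --sku-name Standard_D4ds_v4
```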
-**Q. How long does performing an offline migration with The Single to Flexible Server Migration Tool take?**
+**Q. How long does performing an offline migration with the Single Server to Flexible Server migration tool take?**
-**A.** The following table shows the time for performing offline migrations for databases of various sizes using The Single to Flexible Server Migration Tool. The migration was performed using a flexible server with the SKU:
+**A.** The following table shows the time for performing offline migrations for databases of various sizes using the Single Server to Flexible Server migration tool. The migration was performed using a flexible server with the SKU:
**Standard_D4ds_v4 (4 vCores, 16 GB memory, 128 GB disk, and 500 IOPS)**
| 1,000 GB | 07:00 |

> [!NOTE]
-> The numbers above approximate the time taken to complete the migration. To get the precise time required for migrating to your Server, we strongly recommend taking a PITR (point in time restore) of your single server and running it against The Single to Flexible Server Migration Tool.
+> The numbers above approximate the time taken to complete the migration. To get the precise time required for migrating to your Server, we strongly recommend taking a PITR (point in time restore) of your single server and running it against the Single Server to Flexible Server migration tool.
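If you want to try this, a point-in-time restore copy of a Single Server can be created from the Azure CLI. The following is a sketch only, assuming placeholder resource group and server names; adjust the restore time to a point within your backup retention period.

```azurecli
az postgres server restore \
  --resource-group myResourceGroup \
  --name mysourceserver-pitr \
  --source-server mysourceserver \
  --restore-point-in-time "2023-03-20T13:10:00Z"
```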
-**Q. How long does performing an online migration with The Single to Flexible Server Migration Tool take?**
+**Q. How long does performing an online migration with the Single Server to Flexible Server migration tool take?**
**A.** Online migration involves the following steps:
The time taken for step #2 depends on the transactions that occur on the source.
### Additional support

**Q. I have further questions about retirement.**

**A.** You can get further information in a few different ways.

- Get answers from community experts in [Microsoft Q&A](/answers/tags/214/azure-database-postgresql).
- For Problem subtype, select migrating from single to flexible server.

> [!WARNING]
-> This article is not for Azure Database for PostgreSQL - Flexible Server users. It is for Azure Database for PostgreSQL - Single Server customers who need to upgrade to PostgreSQL - flexible server.
+> This article is not for Azure Database for PostgreSQL - Flexible Server users. It is for Azure Database for PostgreSQL - Single Server customers who need to upgrade to Azure Database for PostgreSQL - Flexible Server.
We know migrating services can be a frustrating experience, and we apologize in advance for any inconvenience this might cause you. You can choose the scenario that works best for you and your environment.
private-5g-core Azure Private 5G Core Release Notes 2303 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2303.md
+
+ Title: Azure Private 5G Core 2303 release notes
+description: Discover what's new in the Azure Private 5G Core 2303 release
++++ Last updated : 03/29/2023++
+# Azure Private 5G Core 2303 release notes
+
+The following release notes identify the new features, critical open issues, and resolved issues for the 2303 release of Azure Private 5G Core (AP5GC). The release notes are continuously updated, with critical issues requiring a workaround added as they're discovered. Before deploying this new version, please review the information contained in these release notes.
+
+This article applies to the AP5GC 2303 release (PMN-2303-0). This release is compatible with the ASE Pro 1 GPU and ASE Pro 2 running the ASE 2301 release, and is supported by the 2022-04-01-preview and 2022-11-01 [Microsoft.MobileNetwork](/rest/api/mobilenetwork) API versions.
+
+## Support
+
+The default support lifetime for a packet core version is roughly two calendar months from release.
+
+The support lifetime for version 2302 will end on May 31, 2023. Please be prepared to plan your packet core upgrade to a future version before 2302 goes out of support on this date.
+
+## What's new
+
+- **VLAN separation on ASE LAN and WAN ports** - This release delivers the ability to set VLAN tags on the external interfaces used by AP5GC, namely the S1-MME, N2, S1-U, N3, SGi, and N6 interfaces. VLAN tagging can also be set per data network configured on the N6/SGi interface, enabling layer 2 separation between these networks. For more details, see [Private mobile network design requirements](private-mobile-network-design-requirements.md).
+
+- **Azure Stack Edge Pro 2 support** - This release (and releases going forward) officially supports compatibility with Azure Stack Edge Pro 2 devices running the ASE 2301 release (and subsequent ASE releases). This and subsequent AP5GC releases support all three models of ASE Pro 2 (Model 64G2T, Model 128G4T1GPU, and Model 256G6T2GPU). All models of the ASE Pro 2 have four ports, instead of the six ports found on the ASE Pro 1 GPU.
+
+- **Web Proxy support** - This feature allows running AP5GC on an ASE configured to use a Web Proxy. For more details on how to configure a Web Proxy on ASE, see [Tutorial: Configure network for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md).
+
+## Issues fixed in the AP5GC 2303 release
+
+The following table provides a summary of issues fixed in this release.
+
+ |No. |Feature | Issue |
+ |--|--|--|
+ | 1 | Install/Upgrade | Modifications to the Attached Data Network resource in an existing deployment may cause service disruption unless the packet core is reinstalled. This issue has been fixed in this release. |
+ | 2 | 4G/5G Signaling | In rare scenarios of continuous high load over several days on a 4G multi-data network setup, the packet core may experience slight disruption resulting in some call failures. This issue has been fixed in this release. |
+
+## Known issues from previous releases
+
+The following table provides a summary of known issues carried over from the previous releases.
+
+ |No. |Feature | Issue | Workaround/comments |
+ |--|--|--|--|
 | 1 | Packet forwarding | AP5GC may not forward buffered packets if NAT is enabled. | Not applicable. |
+ | 2 | Local dashboards | In some scenarios, the local dashboards don't show session rejection under the **Device and Session Statistics** panel if Session Establishment requests are rejected due to invalid PDU type (e.g. IPv6 when only IPv4 supported). | Not applicable. |
 | 3 | Install/upgrade | Changing the technology type of a deployment from 4G (EPC) to 5G using upgrade or site delete and add is not supported. | Please contact support for the required steps to change the technology type. |
+ | 4 | Packet forwarding | In scenarios of sustained high load (for example, continuous setup of 100s of TCP flows per second) combined with NAT pinhole exhaustion, AP5GC can encounter a memory leak, leading to a short period of service disruption resulting in some call failures. | In most cases, the system will recover automatically and be ready to handle new requests after a few seconds' disruption. UEs will need to re-establish any dropped connections. |
+ | 5 | Install/Upgrade | In some cases, the packet core reports successful installation even when the underlying platform or networking is misconfigured. | Not applicable. |
+
+## Next steps
+
+- [Upgrade the packet core instance in a site - Azure portal](upgrade-packet-core-azure-portal.md)
+- [Upgrade the packet core instance in a site - ARM template](upgrade-packet-core-arm-template.md)
private-5g-core Azure Stack Edge Disconnects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-stack-edge-disconnects.md
Last updated 11/30/2022
-# Azure Stack Edge disconnects
+# Temporary AP5GC disconnects
-There are several reasons why your *Azure Private 5G Core (AP5GC)* may have *Azure Stack Edge (ASE)* disconnects. These disconnects can either be unplanned short-term [Temporary disconnects](#temporary-disconnects) or periods of [Disconnected mode for up to two days](#disconnected-mode-for-azure-private-5g-core).
+Azure Stack Edge (ASE) can tolerate up to 5 days of unplanned connectivity issues. The following sections detail the behavior expected during these times and behavior after ASE connectivity resumes.
-## Temporary disconnects
+Throughout temporary disconnects, the **Azure Stack Edge overview** displays a banner stating `The device heartbeat is missing. Some operations will not be available in this state. Critical alert(s) present. Click here to view details.`
-ASE can tolerate small periods of unplanned connectivity issues. The following sections detail the behavior expected during these times and behavior after ASE connectivity resumes.
+> [!CAUTION]
+> Limited Azure Private 5G Core (AP5GC) support is available if you encounter issues while disconnected. If you encounter issues during a disconnect, we recommend you reconnect to enable full support. If it is not possible to reconnect, support is provided on a best-effort basis.
-Throughout any temporary disconnects, the **Azure Stack Edge overview** will display a banner stating `The device heartbeat is missing. Some operations will not be available in this state. Critical alert(s) present. Click here to view details.`
+While disconnected, AP5GC core functionality persists through ASE disconnects due to network issues, network equipment resets and temporary network equipment separation. During disconnects, the ASE management GUI displays several banners stating that it's currently disconnected and describing the impact on functions.
-### Configuration and provisioning actions during temporary disconnects
-
-It's common to see temporary failures such as timeouts of configuration and provisioning while ASE is online, but there is a connectivity issue. AP5GC can handle such events by automatically retrying configuration and provisioning actions once the ASE connectivity is restored. If ASE connectivity is not restored within 10 minutes or ASE is detected as being offline, ongoing operations will fail and you will need to repeat the action manually once ASE reconnects.
+## Unsupported functions during disconnects
-The **Sim overview** and **Sim Policy overview** blades display provisioning status of the resource in the site. This allows you to monitor the progress of provisioning actions. Additionally, the **Packet core control plane overview** displays the **Installation state** which can be used to monitor changes due to configuration actions.
+The following functions are not supported while disconnected:
-### ASE behavior after connectivity resumes
+- Deployment of the packet core
+- Updating the packet core version
+- Updating SIM configuration
+- Updating NAT configuration
+- Updating service policy
+- Provisioning SIMs
-Once ASE connectivity resumes, several features will resume:
+### Monitoring and troubleshooting during disconnects
-- ASE management will resume immediately.-- **Resource Health** will be viewable immediately.-- **Workbooks** will be viewable immediately and will populate for the disconnected duration.-- **Kubernetes Cluster Overview** will show as **Online** after 10 minutes.-- **Monitoring** tabs will show metrics for sites after 10 minutes, but won't populate stats for the disconnected duration.-- **Kubernetes Arc Insights** will show new stats after 10 minutes, but won't populate stats for the disconnected duration.-- **Resource Health** views will be viewable immediately.-- [Workbooks](../update-center/workbooks.md) for the ASE will be viewable immediately and will populate for the disconnected duration.
+While disconnected, you cannot enable local monitoring authentication or sign in to the [distributed tracing](distributed-tracing.md) and [packet core dashboards](packet-core-dashboards.md) using Azure Active Directory. However, you can access both distributed tracing and packet core dashboards via local access if enabled.
-## Disconnected mode for Azure Private 5G Core
+If you expect to need access to your local monitoring tools while the ASE device is disconnected, you can change your authentication method to local usernames and passwords by following [Modify the local access configuration in a site](modify-local-access-configuration.md).
-*Disconnected mode* allows for ASE disconnects of up to two days. During disconnected mode, AP5GC core functionality persists through ASE disconnects due to: network issues, network equipment resets and temporary network equipment separation. During disconnects, the ASE management GUI will display several banners alerting that it's currently disconnected and the impact on functions.
+Once the disconnect ends, log analytics on Azure updates with the stored data, excluding rate and gauge type metrics.
-### Functions not supported while in disconnected mode
+### Configuration and provisioning actions during temporary disconnects
-The following functions aren't supported while in disconnected mode:
+It's common to see temporary failures such as timeouts of configuration and provisioning while ASE is online but with a connectivity issue. AP5GC can handle such events by automatically retrying configuration and provisioning actions once ASE connectivity is restored. If ASE connectivity is not restored within 10 minutes, or ASE is detected as being offline, ongoing operations will fail and you will need to repeat the action manually once the ASE reconnects.
-- Deployment of the 5G core-- Updating the 5G core version-- Updating SIM configuration-- Updating NAT configuration-- Updating service policy-- Provisioning SIMs
+The **Sim overview** and **Sim Policy overview** blades display provisioning status of the resource in the site. This allows you to monitor the progress of provisioning actions. Additionally, the **Packet core control plane overview** displays the **Installation state** which can be used to monitor changes due to configuration actions.
-### Monitoring and troubleshooting during disconnects
+### ASE behavior after connectivity resumes
-While in disconnected mode, you won't be able to change the local monitoring authentication method or sign in to the [distributed tracing](distributed-tracing.md) and [packet core dashboards](packet-core-dashboards.md) using Azure Active Directory. If you expect to need access to your local monitoring tools while the ASE is disconnected, you can change your authentication method to local usernames and passwords by following [Modify the local access configuration in a site](modify-local-access-configuration.md).
+Once ASE connectivity resumes, several features resume:
-Once the disconnect ends, log analytics on Azure will update with the stored data, excluding rate and gauge type metrics.
+- ASE management resumes immediately.
+- **Resource Health** is viewable immediately.
+- **Workbooks** are viewable immediately and populate for the disconnected duration.
+- **Kubernetes Cluster Overview** shows as **Online** after 10 minutes.
+- **Monitoring** tabs show metrics for sites after 10 minutes but don't populate stats for the disconnected duration.
+- **Kubernetes Arc Insights** shows new stats after 10 minutes but doesn't populate stats for the disconnected duration.
+- **Resource Health** views are viewable immediately.
+- [Workbooks](../update-center/workbooks.md) for the ASE are viewable immediately and populate for the disconnected duration.
## Next steps
private-5g-core Azure Stack Edge Packet Core Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-stack-edge-packet-core-compatibility.md
Title: Packet core and Azure Stack Edge compatibility description: Discover which Azure Stack Edge versions are compatible with each packet core version--++ Previously updated : 12/16/2022 Last updated : 03/30/2023 # Packet core and Azure Stack Edge (ASE) compatibility
The following table provides information on which versions of the ASE device are
| Packet core version | Compatible ASE versions | |--|--|
+| 2303 | 2301 |
| 2302 | 2301 |
| 2301 | 2210, 2301 |
| 2211 | 2210 |
private-5g-core Collect Required Information For Private Mobile Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-private-mobile-network.md
Note that once the SIM group is created, the encryption type cannot be changed.
- Manually entering values for each SIM into fields in the Azure portal. This option is best when provisioning a few SIMs.
- Importing a JSON file containing values for one or more SIM resources. This option is best when provisioning a large number of SIMs. The file format required for this JSON file is given in [JSON file format for provisioning SIMs](#json-file-format-for-provisioning-sims). You'll need to use this option if you're deploying your private mobile network with an ARM template.
+ - Importing an encrypted JSON file, provided by your SIM vendor, containing values for one or more SIM resources. You'll need to use this option if you're deploying your private mobile network with a JSON file provided by one of our SIM vendor partners. See [Collecting information for provisioning SIM vendor provided SIMs](#collecting-information-for-provisioning-sim-vendor-provided-sims) rather than collecting the values in the next step.
1. Collect each of the values given in the following table for each SIM resource you want to provision.
The following example shows the file format you'll need if you want to provision
] ```
+## Collecting information for provisioning SIM vendor provided SIMs
+
+Collect and edit each of the values in the following table for each SIM resource you want to provision using an encrypted JSON file, provided by your SIM vendor.
+
+|Value | JSON file parameter name |
+|||
+|The name for the SIM resource. The name must only contain alphanumeric characters, dashes, and underscores.|`name`|
+|The type of device that is using this SIM. This value is an optional, free-form string. You can use it as required to easily identify device types that are using the enterprise's mobile networks.|`deviceType`|
+|The SIM policy ID to apply to the SIM. See [Decide whether you want to use the default service and SIM policy](#decide-whether-you-want-to-use-the-default-service-and-sim-policy).|`simPolicy`|
+|The static IP configuration values for the SIM: **attachedDataNetwork**, **slice**, and **staticIp**.|`staticIpConfigurations`|
+
+### Encrypted JSON file format for provisioning vendor provided SIMs
+
+The following example shows the file format to provision your SIM resources using a SIM vendor provided encrypted JSON file.
+
+```json
+{
+  "version": 1,
+  "azureKey": 1,
+  "vendorKeyFingerprint": "A5719BCEFD6A2021A11D7649942ECC14",
+  "encryptedTransportKey": "0EBAE5E2D31A1BE48495F6DCA65983EEAE6BA6B75A92040562EAD84079BF701CBD3BB1602DB74E85921184820B78A02EC709951195DC87E44481FDB6B826DF775E29B7073644EA66649A14B6CA6B0EE75DE8B4A8D0D5186319E37FBF165A691E607CFF8B65F3E5E9D448049704DE4EA047101ADA4554A543F405B447B8DB687C0B7624E62515445F3E887B3328AA555540D9959752C985490586EF06681501A89594E28F98BF66F179FE3F1D2EE13C69BC42C30A8D3DC6898B8160FC66CDDEE164760F27B68A07BA4C4AE5AFFEA45EE8342E1CA8470150ED6AF4215CEF173418E60E2B1DF4A8C2AE6F0C9A291F5D185ECAD0D94D48EFD06570A0C1AE27D5EC20",
+  "signedTransportKey": "83515CC47C8890F62D4A0D16DE58C2F2DCFD827C317047693A46B9CA1F9EBC33CCDB8CABE04A275D65E180813CCFF43FC2DA95E19E2B9FF2588AE0914418DC9CB5506EB7AEADB272F5DAB9F0B1CCFAD62B95C91D4F4680A350F56D2A7F8EC863F4D61E1D7A38746AEE6C6391D619CCFCFA2B6D554671D91A26484CD6E120D84917FBF69D3B56B2AA8F2B36AF88492F1A7E267594B6C1596B81A81079540EC3F31869294BFEB225DFB171DE557B8C05D7C963E047E3AF36D1387FEDA28E55E411E5FB6AED178FB9C92D674D71AF8FEB6462F509E6423D4EBE0EC84E4135AA6C7A36F849A14A6A70E7188E08278D515BD95A549645E9D595D1DEC13E1A68B9CB67",
+  "sims": [
+    {
+      "name": "SIM 1",
+      "properties": {
+        "deviceType": "Sensor",
+        "integratedCircuitCardIdentifier": "8922345678901234567",
+        "internationalMobileSubscriberIdentity": "001019990010002",
+        "encryptedCredentials": "3ED205BE2DD7F0E467283EC55F9E8F3588B68DC98811BE671070C65EFDE0CCCAD18C8B663231C80FB478F753A6B09142D06982421261679B7BB112D36473EA7EF973DCF7F634124B58DD945FE61D4B16978438CB33E64D3AA58B5C38A0D97030B5F95B16E308D919EB932ACCD36CB8C2838C497B3B38A60E3DD385",
+        "simPolicy": {
+          "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/mobileNetworks/testMobileNetwork/simPolicies/MySimPolicy"
+        },
+        "staticIpConfiguration": [
+          {
+            "attachedDataNetwork": {
+              "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/TestPacketCoreCP/packetCoreDataPlanes/TestPacketCoreDP/attachedDataNetworks/TestAttachedDataNetwork"
+            },
+            "slice": {
+              "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/mobileNetworks/testMobileNetwork/slices/testSlice"
+            },
+            "staticIp": {
+              "ipv4Address": "2.4.0.1"
+            }
+          }
+        ]
+      }
+    }
+  ]
+}
+```
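Before uploading a vendor-provided file, it can be worth checking that it parses as valid JSON. A minimal sketch, assuming `jq` is installed and the file is saved as `encrypted-sims.json` (a placeholder name):

```bash
# Pretty-prints the parsed document, or reports an error if the file is malformed.
jq . encrypted-sims.json
```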
+
## Decide whether you want to use the default service and SIM policy

Azure Private 5G Core offers a default service and SIM policy that allow all traffic in both directions for all the SIMs you provision. They're designed to allow you to quickly deploy a private mobile network and bring SIMs into service automatically, without the need to design your own policy control configuration.
private-5g-core Commission Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/commission-cluster.md
Previously updated : 01/31/2023 Last updated : 03/30/2023
+zone_pivot_groups: ase-pro-version
# Commission the AKS cluster
Wait for the machine to reboot if necessary (approximately 5 minutes).
You now need to configure virtual switches and virtual networks on those switches. You'll use the **Advanced networking** section of the Azure Stack Edge local UI to do this task. You can input all the settings on this page before selecting **Apply** at the bottom to apply them all at once.

1. Configure three virtual switches. There must be a virtual switch associated with each port before the next step. The virtual switches may already be present if you have other virtual network functions (VNFs) set up.
+ Select **Add virtual switch** and fill in the side panel appropriately for each switch before selecting **Modify** to save that configuration.
+ - Create a virtual switch on the port that should have compute enabled (the management port). We recommend using the format **vswitch-portX**, where **X** is the number of the port. For example, create **vswitch-port2** on port 2.
+ - Create a virtual switch on port 3 with the name **vswitch-port3**.
+ - Create a virtual switch on port 4 with the name **vswitch-port4**.
+ You should now see something similar to the following image:
+ :::image type="content" source="media/commission-cluster/commission-cluster-virtual-switch-ase-2.png" alt-text="Screenshot showing three virtual switches, where the names correspond to the network interface the switch is on. ":::
+
+1. Configure three virtual switches. There must be a virtual switch associated with each port before the next step. The virtual switches may already be present if you have other virtual network functions (VNFs) set up.
Select **Add virtual switch** and fill in the side panel appropriately for each switch before selecting **Modify** to save that configuration.
   - Create a virtual switch on the port that should have compute enabled (the management port). We recommend using the format **vswitch-portX**, where **X** is the number of the port. For example, create **vswitch-port3** on port 3.
   - Create a virtual switch on port 5 with the name **vswitch-port5**.
You can input all the settings on this page before selecting **Apply** at the bo
You should now see something similar to the following image: :::image type="content" source="media/commission-cluster/commission-cluster-virtual-switch.png" alt-text="Screenshot showing three virtual switches, where the names correspond to the network interface the switch is on. ":::
-
-1. Create virtual networks representing the following interfaces (which you allocated subnets and IP addresses for in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses)):
+2. Create virtual networks representing the following interfaces (which you allocated subnets and IP addresses for in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses)):
   - Control plane access interface
   - User plane access interface
   - User plane data interface(s)
+ You can name these networks yourself, but the name **must** match what you configure in the Azure portal when deploying Azure Private 5G Core. For example, you can use the names **N2**, **N3** and **N6-DN1**, **N6-DN2**, **N6-DN3** (for a 5G deployment with multiple data networks (DNs); just **N6** for a single DN deployment). You can optionally configure each virtual network with a virtual local area network identifier (VLAN ID) to enable layer 2 traffic separation. The following example is for a 5G multi-DN deployment without VLANs.
+3. Carry out the following procedure three times, plus once for each of the supplementary data networks (so five times in total if you have three data networks):
+ 1. Select **Add virtual network** and fill in the side panel:
+ - **Virtual switch**: select **vswitch-port3** for N2 and N3, and select **vswitch-port4** for N6-DN1, N6-DN2, and N6-DN3.
+ - **Name**: *N2*, *N3*, *N6-DN1*, *N6-DN2*, or *N6-DN3*.
+ - **VLAN**: 0
+ - **Subnet mask** and **Gateway** must match the external values for the port.
+ - For example, *255.255.255.0* and *10.232.44.1*
+ - If there's no gateway between the access interface and gNB/RAN, use the gNB/RAN IP address as the gateway address. If there's more than one gNB connected via a switch, choose one of the IP addresses for the gateway.
+ 1. Select **Modify** to save the configuration for this virtual network.
+ 1. Select **Apply** at the bottom of the page and wait for the notification (a bell icon) to confirm that the settings have been applied. Applying the settings will take approximately 15 minutes.
+ The page should now look like the following image:
- You can name these networks yourself, but the name **must** match what you configure in the Azure portal when deploying Azure Private 5G Core. For example, you can use the names **N2**, **N3** and **N6-DN1**, **N6-DN2**, **N6-DN3** (for a 5G deployment with multiple data networks (DNs); just **N6** for a single DN deployment). The following example is for a 5G multi-DN deployment.
-
-1. Carry out the following procedure three times, plus once for each of the supplementary data networks (so five times in total if you have three data networks):
-
+ :::image type="content" source="media/commission-cluster/commission-cluster-advanced-networking-ase-2.png" alt-text="Screenshot showing Advanced networking, with a table of virtual switch information and a table of virtual network information.":::
+3. Carry out the following procedure three times, plus once for each of the supplementary data networks (so five times in total if you have three data networks):
   1. Select **Add virtual network** and fill in the side panel:
      - **Virtual switch**: select **vswitch-port5** for N2 and N3, and select **vswitch-port6** for N6-DN1, N6-DN2, and N6-DN3.
      - **Name**: *N2*, *N3*, *N6-DN1*, *N6-DN2*, or *N6-DN3*.
- - **VLAN**: 0
+ - **VLAN**: VLAN ID, or 0 if not using VLANs
      - **Subnet mask** and **Gateway** must match the external values for the port.
        - For example, *255.255.255.0* and *10.232.44.1*
        - If there's no gateway between the access interface and gNB/RAN, use the gNB/RAN IP address as the gateway address. If there's more than one gNB connected via a switch, choose one of the IP addresses for the gateway.
   1. Select **Modify** to save the configuration for this virtual network.
   1. Select **Apply** at the bottom of the page and wait for the notification (a bell icon) to confirm that the settings have been applied. Applying the settings will take approximately 15 minutes.
-
The page should now look like the following image:

:::image type="content" source="media/commission-cluster/commission-cluster-advanced-networking.png" alt-text="Screenshot showing Advanced networking, with a table of virtual switch information and a table of virtual network information.":::

## Add compute and IP addresses
In the local Azure Stack Edge UI, go to the **Kubernetes (Preview)** page. You'l
The page should now look like the following image:

:::image type="content" source="media/commission-cluster/commission-cluster-kubernetes-preview-enabled.png" alt-text="Screenshot showing Kubernetes (Preview) with two tables. The first table is called Compute virtual switch and the second is called Virtual network. A green tick shows that the virtual networks are enabled for Kubernetes.":::

## Start the cluster and set up Arc

Access the Azure portal and go to the **Azure Stack Edge** resource created in the Azure portal.
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
Title: Prepare to deploy a private mobile network description: Learn how to complete the prerequisite tasks for deploying a private mobile network with Azure Private 5G Core.--++ Previously updated : 01/31/2023 Last updated : 03/30/2023
+zone_pivot_groups: ase-pro-version
# Complete the prerequisite tasks for deploying a private mobile network
In this how-to guide, you'll carry out each of the tasks you need to complete before you can deploy a private mobile network using Azure Private 5G Core. > [!TIP]
-> [Private mobile network design requirements](private-mobile-network-design-requirements.md) contains the full network design requirements for a customised network.
+> [Private mobile network design requirements](private-mobile-network-design-requirements.md) contains the full network design requirements for a customized network.
## Get access to Azure Private 5G Core for your Azure subscription
Depending on your networking requirements (for example, if a limited set of subn
### Management network -- Network address in Classless Inter-Domain Routing (CIDR) notation. +
+- Network address in Classless Inter-Domain Routing (CIDR) notation.
+- Default gateway.
+- One IP address for the management port (port 2) on the Azure Stack Edge Pro 2 device.
+- Six sequential IP addresses for the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster nodes.
+- One IP address for accessing local monitoring tools for the packet core instance.
++
+- Network address in Classless Inter-Domain Routing (CIDR) notation.
- Default gateway.-- One IP address for the Azure Stack Edge Pro device's management port. You'll choose a port between 2 and 4 to use as the management port as part of [setting up your Azure Stack Edge Pro device](#order-and-set-up-your-azure-stack-edge-pro-devices).*
+- One IP address for the management port
+ - You'll choose a port between 2 and 4 to use as the Azure Stack Edge Pro GPU device's management port as part of [setting up your Azure Stack Edge Pro device](#order-and-set-up-your-azure-stack-edge-pro-devices).*
- Six sequential IP addresses for the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster nodes. - One IP address for accessing local monitoring tools for the packet core instance. + ### Access network +
+- Network address in CIDR notation.
+- Default gateway.
+- One IP address for the control plane interface. For 5G, this interface is the N2 interface, whereas for 4G, it's the S1-MME interface.*
+- One IP address for the user plane interface. For 5G, this interface is the N3 interface, whereas for 4G, it's the S1-U interface.*
+- One IP address for port 3 on the Azure Stack Edge Pro 2 device.
+++ - Network address in CIDR notation. - Default gateway. - One IP address for the control plane interface. For 5G, this interface is the N2 interface, whereas for 4G, it's the S1-MME interface.* - One IP address for the user plane interface. For 5G, this interface is the N3 interface, whereas for 4G, it's the S1-U interface.*-- One IP address for port 5 on the Azure Stack Edge Pro device.
+- One IP address for port 5 on the Azure Stack Edge Pro GPU device.
+ ### Data networks
Allocate the following IP addresses for each data network in the site:
- Default gateway. - One IP address for the user plane interface. For 5G, this interface is the N6 interface, whereas for 4G, it's the SGi interface.* +
+The following IP address must be shared by all the data networks in the site:
+
+- One IP address for port 4 on the Azure Stack Edge Pro 2 device.
+++ The following IP address must be shared by all the data networks in the site: -- One IP address for port 6 on the Azure Stack Edge Pro device.
+- One IP address for port 6 on the Azure Stack Edge Pro GPU device.
++
+### VLANs
+
+You can optionally configure your Azure Stack Edge Pro device with virtual local area network (VLAN) tags. You can use this to enable layer 2 traffic separation on the N2, N3 and N6 interfaces, or their 4G equivalents. For example, you might want to separate N2 and N3 traffic (which share a port on the ASE device) or separate traffic for each connected data network.
+
+Allocate VLAN IDs for each network as required.
## Allocate user equipment (UE) IP address pools Azure Private 5G Core supports the following IP address allocation methods for UEs. -- Dynamic. Dynamic IP address allocation automatically assigns a new IP address to a UE each time it connects to the private mobile network.
+- Dynamic. Dynamic IP address allocation automatically assigns a new IP address to a UE each time it connects to the private mobile network.
- Static. Static IP address allocation ensures that a UE receives the same IP address every time it connects to the private mobile network. This is useful when you want Internet of Things (IoT) applications to be able to consistently connect to the same device. For example, you may configure a video analysis application with the IP addresses of the cameras providing video streams. If these cameras have static IP addresses, you won't need to reconfigure the video analysis application with new IP addresses each time the cameras restart. You'll allocate static IP addresses to a UE as part of [provisioning its SIM](provision-sims-azure-portal.md).
DNS allows the translation between human-readable domain names and their associa
## Prepare your networks
-For each site you're deploying, do the following.
--- Ensure you have at least one network switch with at least three ports available. You'll connect each Azure Stack Edge Pro device to the switch(es) in the same site as part of the instructions in [Order and set up your Azure Stack Edge Pro device(s)](#order-and-set-up-your-azure-stack-edge-pro-devices).-- For every network where you decided not to enable NAPT (as described in [Allocate user equipment (UE) IP address pools](#allocate-user-equipment-ue-ip-address-pools)), configure the data network to route traffic destined for the UE IP address pools via the IP address you allocated to the packet core instance's user plane interface on the data network.
+For each site you're deploying, do the following.
+ - Ensure you have at least one network switch with at least three ports available. You'll connect each Azure Stack Edge Pro device to the switch(es) in the same site as part of the instructions in [Order and set up your Azure Stack Edge Pro device(s)](#order-and-set-up-your-azure-stack-edge-pro-devices).
+ - For every network where you decided not to enable NAPT (as described in [Allocate user equipment (UE) IP address pools](#allocate-user-equipment-ue-ip-address-pools)), configure the data network to route traffic destined for the UE IP address pools via the IP address you allocated to the packet core instance's user plane interface on the data network.
### Configure ports for local access +
+The following table contains the ports you need to open for Azure Private 5G Core local access. This includes local management access and control plane signaling.
+
+You must set these up in addition to the [ports required for Azure Stack Edge (ASE)](/azure/databox-online/azure-stack-edge-pro-2-system-requirements#networking-port-requirements).
+
+| Port | ASE interface | Description|
+|--|--|--|
+| TCP 443 Inbound | Management (LAN) | Access to local monitoring tools (packet core dashboards and distributed tracing). |
+| SCTP 38412 Inbound | Port 3 (Access network) | Control plane access signaling (N2 interface). </br>Only required for 5G deployments. |
+| SCTP 36412 Inbound | Port 3 (Access network) | Control plane access signaling (S1-MME interface). </br>Only required for 4G deployments. |
+| UDP 2152 In/Outbound | Port 3 (Access network) | Access network user plane data (N3 interface for 5G, S1-U for 4G). |
+| All IP traffic | Port 4 (Data networks) | Data network user plane data (N6 interface for 5G, SGi for 4G). |
+++ The following table contains the ports you need to open for Azure Private 5G Core local access. This includes local management access and control plane signaling. You must set these up in addition to the [ports required for Azure Stack Edge (ASE)](../databox-online/azure-stack-edge-gpu-system-requirements.md#networking-port-requirements).
You must set these up in addition to the [ports required for Azure Stack Edge (A
| UDP 2152 In/Outbound | Port 5 (Access network) | Access network user plane data (N3 interface for 5G, S1-U for 4G). |
| All IP traffic | Port 6 (Data networks) | Data network user plane data (N6 interface for 5G, SGi for 4G). |

### Outbound firewall ports required

Review and apply the firewall recommendations for the following
To use Azure Private 5G Core, you need to register some additional resource prov
> [!TIP] > If you do not have the Azure CLI installed, see installation instructions at [How to install the Azure CLI](/cli/azure/install-azure-cli). Alternatively, you can use the [Azure Cloud Shell](../cloud-shell/overview.md) on the portal.
-
+ 1. Sign into the Azure CLI with a user account that is associated with the Azure tenant that you are deploying Azure Private 5G Core into:+ ```azurecli az login ```+ > [!TIP]
- > See [Sign in interactively](/cli/azure/authenticate-azure-cli) for full instructions.
+ > See [Sign in interactively](/cli/azure/authenticate-azure-cli) for full instructions.
1. If your account has multiple subscriptions, make sure you are in the correct one:+ ```azurecli az account set --subscription <subscription_id> ```+ 1. Check the Azure CLI version:+ ```azurecli az version ```+ If the CLI version is below 2.37.0, you will need to upgrade your Azure CLI to a newer version. See [How to update the Azure CLI](/cli/azure/update-azure-cli). 1. Register the following resource providers:+ ```azurecli az provider register --namespace Microsoft.MobileNetwork az provider register --namespace Microsoft.HybridNetwork
To use Azure Private 5G Core, you need to register some additional resource prov
az provider register --namespace Microsoft.Kubernetes az provider register --namespace Microsoft.KubernetesConfiguration ```+ 1. Register the following features:+ ```azurecli az feature register --name allowVnfCustomer --namespace Microsoft.HybridNetwork az feature register --name previewAccess --namespace Microsoft.Kubernetes
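Registration can take a few minutes to complete. One way to confirm the resulting state is shown below; this is a sketch only, and the same pattern applies to the other namespaces and features listed above:

```azurecli
az provider show --namespace Microsoft.MobileNetwork --query registrationState --output tsv
az feature show --name allowVnfCustomer --namespace Microsoft.HybridNetwork --query properties.state --output tsv
```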
This command queries the custom location and will output an OID string. Save th
Do the following for each site you want to add to your private mobile network. Detailed instructions for how to carry out each step are included in the **Detailed instructions** column where applicable. +
+| Step No. | Description | Detailed instructions |
+|--|--|--|
+| 1. | Complete the Azure Stack Edge Pro 2 deployment checklist.| [Deployment checklist for your Azure Stack Edge Pro 2 device](/azure/databox-online/azure-stack-edge-pro-2-deploy-checklist?pivots=single-node)|
+| 2. | Order and prepare your Azure Stack Edge Pro 2 device. | [Tutorial: Prepare to deploy Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-pro-2-deploy-prep.md) |
+| 3. | Rack and cable your Azure Stack Edge Pro 2 device. </br></br>When carrying out this procedure, you must ensure that the device has its ports connected as follows:</br></br>- Port 2 - management</br>- Port 3 - access network</br>- Port 4 - data networks | [Tutorial: Install Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-install?pivots=single-node) |
+| 4. | Connect to your Azure Stack Edge Pro 2 device using the local web UI. | [Tutorial: Connect to Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-connect?pivots=single-node) |
+| 5. | Configure the network for your Azure Stack Edge Pro 2 device. When carrying out the *Enable compute network* step of this procedure, ensure you use the port you've connected to your management network. </br></br>**Do not** configure virtual switches, virtual networks or compute IPs.| [Tutorial: Configure network for Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy?pivots=single-node)|
+| 6. | Configure a name, DNS name, and (optionally) time settings. </br></br>**Do not** configure an update. | [Tutorial: Configure the device settings for Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-pro-2-deploy-set-up-device-update-time.md) |
+| 7. | Configure certificates and configure encryption-at-rest for your Azure Stack Edge Pro 2 device. After changing the certificates, you may have to reopen the local UI in a new browser window to prevent the old cached certificates from causing problems.| [Tutorial: Configure certificates for your Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-configure-certificates) |
+| 8. | Activate your Azure Stack Edge Pro 2 device. </br></br>**Do not** follow the section to *Deploy Workloads*. | [Tutorial: Activate Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-pro-2-deploy-activate.md) |
+| 9. | Configure compute on your Azure Stack Edge Pro 2 device. | [Tutorial: Configure compute on Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-pro-2-deploy-configure-compute.md) |
+| 10. | Enable VM management from the Azure portal. </br></br>Enabling this immediately after activating the Azure Stack Edge Pro 2 device occasionally causes an error. Wait one minute and retry. | Navigate to the ASE resource in the Azure portal, go to **Edge services**, select **Virtual machines** and select **Enable**. |
+| 11. | Run the diagnostics tests for the Azure Stack Edge Pro 2 device in the local web UI, and verify they all pass. </br></br>You may see a warning about a disconnected, unused port. You should fix the issue if the warning relates to any of these ports:</br></br>- Port 2 - management</br>- Port 3 - access network</br>- Port 4 - data networks</br></br>For all other ports, you can ignore the warning. </br></br>If there are any errors, resolve them before continuing with the remaining steps. This includes any errors related to invalid gateways on unused ports. In this case, either delete the gateway IP address or set it to a valid gateway for the subnet. | [Run diagnostics, collect logs to troubleshoot Azure Stack Edge device issues](../databox-online/azure-stack-edge-gpu-troubleshoot.md) |
+
+> [!IMPORTANT]
+> You must ensure your Azure Stack Edge Pro 2 device is compatible with the Azure Private 5G Core version you plan to install. See [Packet core and Azure Stack Edge (ASE) compatibility](./azure-stack-edge-packet-core-compatibility.md). If you need to upgrade your Azure Stack Edge Pro 2 device, see [Update your Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-gpu-install-update.md?tabs=version-2106-and-later).
+
| Step No. | Description | Detailed instructions |
|--|--|--|
-| 1. | Complete the Azure Stack Edge Pro deployment checklist.| [Deployment checklist for your Azure Stack Edge Pro GPU device](../databox-online/azure-stack-edge-gpu-deploy-checklist.md?pivots=single-node.md)|
-| 2. | Order and prepare your Azure Stack Edge Pro device. | [Tutorial: Prepare to deploy Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-prep.md) |
-| 3. | Rack and cable your Azure Stack Edge Pro device. </br></br>When carrying out this procedure, you must ensure that the device has its ports connected as follows:</br></br>- Port 5 - access network</br>- Port 6 - data networks</br></br>Additionally, you must have a port connected to your management network. You can choose any port from 2 to 4. | [Tutorial: Install Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-install.md?pivots=single-node.md) |
-| 4. | Connect to your Azure Stack Edge Pro device using the local web UI. | [Tutorial: Connect to Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-connect.md?pivots=single-node.md) |
-| 5. | Configure the network for your Azure Stack Edge Pro device. When carrying out the *Enable compute network* step of this procedure, ensure you use the port you've connected to your management network. </br></br>**Do not** configure virtual switches, virtual networks or compute IPs.| [Tutorial: Configure network for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md?pivots=single-node.md)|
+| 1. | Complete the Azure Stack Edge Pro GPU deployment checklist.| [Deployment checklist for your Azure Stack Edge Pro GPU device](/azure/databox-online/azure-stack-edge-gpu-deploy-checklist?pivots=single-node)|
+| 2. | Order and prepare your Azure Stack Edge Pro GPU device. | [Tutorial: Prepare to deploy Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-prep.md) |
+| 3. | Rack and cable your Azure Stack Edge Pro device. </br></br>When carrying out this procedure, you must ensure that the device has its ports connected as follows:</br></br>- Port 5 - access network</br>- Port 6 - data networks</br></br>Additionally, you must have a port connected to your management network. You can choose any port from 2 to 4. | [Tutorial: Install Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-install?pivots=single-node) |
+| 4. | Connect to your Azure Stack Edge Pro device using the local web UI. | [Tutorial: Connect to Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-connect?pivots=single-node) |
+| 5. | Configure the network for your Azure Stack Edge Pro device. When carrying out the *Enable compute network* step of this procedure, ensure you use the port you've connected to your management network. </br></br>**Do not** configure virtual switches, virtual networks or compute IPs.</br></br> In addition, you can configure your Azure Stack Edge Pro device to run behind a web proxy. | [Tutorial: Configure network for Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy?pivots=single-node) </br></br> [(Optionally) Configure web proxy for Azure Stack Edge Pro](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy?pivots=single-node#configure-web-proxy)|
| 6. | Configure a name, DNS name, and (optionally) time settings. </br></br>**Do not** configure an update. | [Tutorial: Configure the device settings for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time.md) |
-| 7. | Configure certificates for your Azure Stack Edge Pro device. After changing the certificates, you may have to reopen the local UI in a new browser window to prevent the old cached certificates from causing problems.| [Tutorial: Configure certificates for your Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-configure-certificates.md?pivots=single-node.md) |
-| 8. | Activate your Azure Stack Edge Pro device. </br></br>**Do not** follow the section to *Deploy Workloads*. | [Tutorial: Activate Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-activate.md) |
+| 7. | Configure certificates for your Azure Stack Edge Pro GPU device. After changing the certificates, you may have to reopen the local UI in a new browser window to prevent the old cached certificates from causing problems.| [Tutorial: Configure certificates for your Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-certificates?pivots=single-node) |
+| 8. | Activate your Azure Stack Edge Pro GPU device. </br></br>**Do not** follow the section to *Deploy Workloads*. | [Tutorial: Activate Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-activate.md) |
| 9. | Enable VM management from the Azure portal. </br></br>Enabling this immediately after activating the Azure Stack Edge Pro device occasionally causes an error. Wait one minute and retry. | Navigate to the ASE resource in the Azure portal, go to **Edge services**, select **Virtual machines** and select **Enable**. |
-| 10. | Run the diagnostics tests for the Azure Stack Edge Pro device in the local web UI, and verify they all pass. </br></br>You may see a warning about a disconnected, unused port. You should fix the issue if the warning relates to any of these ports:</br></br>- Port 5.</br>- Port 6.</br>- The port you chose to connect to the management network in Step 3.</br></br>For all other ports, you can ignore the warning. </br></br>If there are any errors, resolve them before continuing with the remaining steps. This includes any errors related to invalid gateways on unused ports. In this case, either delete the gateway IP address or set it to a valid gateway for the subnet. | [Run diagnostics, collect logs to troubleshoot Azure Stack Edge device issues](../databox-online/azure-stack-edge-gpu-troubleshoot.md) |
+| 10. | Run the diagnostics tests for the Azure Stack Edge Pro GPU device in the local web UI, and verify they all pass. </br></br>You may see a warning about a disconnected, unused port. You should fix the issue if the warning relates to any of these ports:</br></br>- Port 5.</br>- Port 6.</br>- The port you chose to connect to the management network in Step 3.</br></br>For all other ports, you can ignore the warning. </br></br>If there are any errors, resolve them before continuing with the remaining steps. This includes any errors related to invalid gateways on unused ports. In this case, either delete the gateway IP address or set it to a valid gateway for the subnet. | [Run diagnostics, collect logs to troubleshoot Azure Stack Edge device issues](../databox-online/azure-stack-edge-gpu-troubleshoot.md) |
> [!IMPORTANT]
-> You must ensure your Azure Stack Edge Pro device is compatible with the Azure Private 5G Core version you plan to install. See [Packet core and Azure Stack Edge (ASE) compatibility](./azure-stack-edge-packet-core-compatibility.md). If you need to upgrade your Azure Stack Edge Pro device, see [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md?tabs=version-2106-and-later).
+> You must ensure your Azure Stack Edge Pro GPU device is compatible with the Azure Private 5G Core version you plan to install. See [Packet core and Azure Stack Edge (ASE) compatibility](./azure-stack-edge-packet-core-compatibility.md). If you need to upgrade your Azure Stack Edge Pro device, see [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md?tabs=version-2106-and-later).
+
## Next steps
-You can now commission the Azure Kubernetes Service (AKS) cluster on your Azure Stack Edge Pro device to get it ready to deploy Azure Private 5G Core.
+You can now commission the Azure Kubernetes Service (AKS) cluster on your Azure Stack Edge Pro 2 or Azure Stack Edge Pro GPU device to get it ready to deploy Azure Private 5G Core.
- [Commission an AKS cluster](commission-cluster.md)
private-5g-core How To Guide Deploy A Private Mobile Network Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/how-to-guide-deploy-a-private-mobile-network-azure-portal.md
Private mobile networks provide high performance, low latency, and secure connec
- Ensure you can sign in to the Azure portal using an account with access to the active subscription you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). This account must have the built-in Contributor or Owner role at the subscription scope.
- Collect all of the information listed in [Collect the required information to deploy a private mobile network](collect-required-information-for-private-mobile-network.md). You may also need to take the following steps based on the decisions you made when collecting this information.
- - If you decided you wanted to provision SIMs using a JSON file, ensure you've prepared this file and made it available on the machine you'll use to access the Azure portal. For more information on the file format, see [JSON file format for provisioning SIMs](collect-required-information-for-private-mobile-network.md#json-file-format-for-provisioning-sims).
+ - If you decided you want to provision SIMs using a JSON file, ensure you've prepared this file and made it available on the machine you'll use to access the Azure portal. For more information on the file format, see [JSON file format for provisioning SIMs](collect-required-information-for-private-mobile-network.md#json-file-format-for-provisioning-sims) or [Encrypted JSON file format for provisioning vendor provided SIMs](collect-required-information-for-private-mobile-network.md#encrypted-json-file-format-for-provisioning-vendor-provided-sims).
- If you decided you want to use the default service and SIM policy, identify the name of the data network to which you want to assign the policy.

## Deploy your private mobile network
In this step, you'll create the Mobile Network resource representing your privat
:::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/create-private-mobile-network-sims-tab.png" alt-text="Screenshot of the Azure portal showing the SIMs configuration tab.":::
+ - If you select **Upload Encrypted JSON file**, the following notice will appear.
+
+ :::image type="content" source="media/how-to-guide-deploy-a-private-mobile-network-azure-portal/create-private-mobile-network-vendor-sims-notice.png" alt-text="Screenshot of the Azure portal showing a notice on the SIMs configuration tab stating: At the moment, you will not be able to upload the encrypted SIMs under this SIM group. However, you will be able upload the encrypted SIMs under the SIM group section, once the above named SIM group gets created.":::
+
1. If you're provisioning SIMs at this point, you'll need to take the following additional steps.
    1. If you want to use the default service and SIM policy, set **Do you wish to create a basic, default SIM policy and assign it to these SIMs?** to **Yes**, and then enter the name of the data network into the **Data network name** field that appears.
    1. Under **Enter SIM group information**, set **SIM group name** to your chosen name for the SIM group to which your SIMs will be added.
private-5g-core Packet Core Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/packet-core-dashboards.md
You can access the following packet core dashboards:
:::image type="content" source="media/packet-core-dashboards/packet-core-4g-interfaces-dashboard.png" alt-text="Screenshot of the 4G Interfaces dashboard. Panels related to activity on the packet core instance's 4G interfaces are shown." lightbox="media/packet-core-dashboards/packet-core-4g-interfaces-dashboard.png":::
+#### Filter by data network
+
+Some packet core dashboards can be filtered to show statistics for specific data networks on certain panels.
+
+Where supported, at the top left of the dashboard, a **Data Network** dropdown displays all the data networks for the deployment. Selecting one or more checkboxes next to the data network names applies a filter to the panels that support it. By default, all data networks are displayed.
+
## Panels and rows

Each dashboard contains **panels** and **rows**.
private-5g-core Private Mobile Network Design Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/private-mobile-network-design-requirements.md
Title: Private mobile network design requirements
description: Learn how to design a private mobile network for Azure Private 5G Core.
--++
Previously updated : 10/25/2022
Last updated : 03/30/2023

# Private mobile network design requirements
-This article helps you design and prepare for implementing a private 4G or 5G network based on the Azure Private 5G technology. It aims to provide an understanding of how these networks are constructed and the decisions that you need to make as you plan your network.
+This article helps you design and prepare for implementing a private 4G or 5G network based on Azure Private 5G Core (AP5GC). It aims to provide an understanding of how these networks are constructed and the decisions that you need to make as you plan your network.
## Azure Private MEC and Azure Private 5G Core
-[Azure private multi-access edge compute (MEC)](../private-multi-access-edge-compute-mec/overview.md) is a solution that combines Microsoft compute, networking, and application services onto a deployment at the enterprise premises (edge). These deployments are managed centrally from the cloud. Azure Private 5G Core is an Azure service within Azure private MEC that provides 4G and 5G core network functions at the enterprise edge. At the enterprise edge site, devices attach across a cellular radio access network (RAN) and are connected via the Azure Private 5G Core service to upstream networks, applications, and resources. Optionally, devices may use the local compute capability provided by Azure private MEC to process data streams at very low latency, all under the control of the enterprise.
+[Azure private multi-access edge compute (MEC)](../private-multi-access-edge-compute-mec/overview.md) is a solution that combines Microsoft compute, networking, and application services into a deployment at enterprise premises (the edge). These edge deployments are managed centrally from the cloud. Azure Private 5G Core is an Azure service within Azure Private Multi-access Edge Compute (MEC) that provides 4G and 5G core network functions at the enterprise edge. At the enterprise edge site, devices attach across a cellular radio access network (RAN) and are connected via the Azure Private 5G Core service to upstream networks, applications, and resources. Optionally, devices may use the local compute capability provided by Azure Private MEC to process data streams at very low latency, all under the control of the enterprise.
:::image type="content" source="media/private-5g-elements.png" alt-text="Diagram displaying the components of a private network solution. UEs, RANs and sites are at the edge, while Azure region management is in the cloud.":::
The following capabilities must be present to allow user equipment (UEs) to atta
- The UE must be compatible with the protocol and the wireless spectrum band used by the radio access network (RAN).
- The UE must contain a subscriber identity module (SIM). The SIM is a cryptographic element that stores the identity of the device.
- There must be a RAN, sending and receiving the cellular signal, to all parts of the enterprise site that contain UEs needing service.
-- A packet core instance connected to the RAN and to an upstream network is required. The packet core is responsible for authenticating the UE's SIMs as they connect across the RAN and request service from the network. It applies policy to the resulting data flows to and from the UEs, for example, to set a quality of service.
+- There must be a packet core instance connected to the RAN and to an upstream network. The packet core is responsible for authenticating the UE's SIMs as they connect across the RAN and request service from the network. It applies policy to the resulting data flows to and from the UEs; for example, to set a quality of service.
- The RAN, packet core, and upstream network infrastructure must be connected via Ethernet so that they can pass IP traffic to one another.

## Designing a private mobile network
The following sections describe elements of the network you need to consider and
### Topology
-Designing and implementing your local network is a foundational part of your Edge AP5GC deployment. You need to make networking design decisions to properly support your Private 5G Core and any Edge workloads.
-In this section, we outline some decisions you should consider when designing your network and provide some sample network topologies. The following diagram shows a standard network topology.
+Designing and implementing your local network is a foundational part of your AP5GC deployment. You need to make networking design decisions to support your AP5GC packet core and any other edge workloads.
+This section outlines some decisions you should consider when designing your network and provides some sample network topologies. The following diagram shows a basic network topology.
#### Design considerations

When deployed on Azure Stack Edge (ASE), AP5GC uses physical port 5 for access signaling and data (5G N2 and N3 reference points/4G S1 and S1-U reference points) and port 6 for core data (5G N6/4G SGi reference points).
-Azure Private 5G Core Packet Core supports deployments with or without L3 routers on ports 5 and 6. This is useful for avoiding extra hardware at small Edge sites.
+AP5GC supports deployments with or without layer 3 routers on ports 5 and 6. This is useful for avoiding extra hardware at smaller edge sites.
-- It is possible to connect ASE port 5 to RAN nodes directly (back-to-back) or via an L2 switch. When using this topology, it is required to configure the eNodeB/gNodeB address as the default gateway in the ASE network interface configuration.
-- Similarly, it is possible to connect ASE port 6 to your core network via an L2 switch. When using this topology, it is required to set up an application or an arbitrary address on the subnet as gateway on the ASE side.
-- Alternatively, you can combine these approaches. For example: using a router on ASE port 6 with a flat L2 network on ASE port 5. If a L3 router is present in local network topology, you must set it
- Alternatively, you can combine these approaches. For example: using a router on ASE port 6 with a flat L2 network on ASE port 5. If a L3 router is present in local network topology, you must set it as the gateway in ASE configuration.
+- It is possible to connect ASE port 5 to RAN nodes directly (back-to-back) or via a layer 2 switch. When using this topology, you must configure the eNodeB/gNodeB address as the default gateway on the ASE network interface.
+- Similarly, it is possible to connect ASE port 6 to your core network via a layer 2 switch. When using this topology, you must set up an application or an arbitrary address on the subnet as gateway on the ASE side.
+- Alternatively, you can combine these approaches. For example, you could use a router on ASE port 6 with a flat layer 2 network on ASE port 5. If a layer 3 router is present in the local network, you must configure it to match the ASE's configuration.
-Unless your AP5GC Packet core is using NAT, there must be a L3 router configured with static routes to the UE IP pools via the appropriate N6 IP address for the corresponding Attached Data Network.
+Unless your packet core has Network Address Translation (NAT) enabled, a local layer 3 network device must be configured with static routes to the UE IP pools via the appropriate N6 IP address for the corresponding attached data network.
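
For example, a minimal sketch of such a static route on a Linux-based router (the UE IP pool `10.45.0.0/16` and N6 interface address `10.0.6.10` are placeholders, not values from this article) might look like:

```bash
# Route traffic destined for the UE IP pool via the packet core's N6 address for that data network.
sudo ip route add 10.45.0.0/16 via 10.0.6.10
```

The equivalent configuration on a dedicated router or firewall depends on that device's own syntax.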
#### Sample network topologies
-There are multiple ways to set up your network for use with AP5GC packet core. The exact setup varies depending on your own needs and hardware. This section provides some sample network topologies.
+There are multiple ways to set up your network for use with AP5GC. The exact setup varies depending on your needs and hardware. This section provides some sample network topologies on ASE Pro GPU hardware.
- Layer 3 network with N6 Network Address Translation (NAT)
- This network topology has your ASE connected to a layer 2 device that provides connectivity to the mobile network core and access gateways (routers connecting your ASE to your data and access networks respectively). This solution is commonly used as it supports L3 routing when required.
+ This network topology has your ASE connected to a layer 2 device that provides connectivity to the mobile network core and access gateways (routers connecting your ASE to your data and access networks respectively). This solution is commonly used because it supports layer 3 routing when required.
  :::image type="content" source="media/private-mobile-network-design-requirements/layer-3-network-with-n6-nat.png" alt-text="Diagram of a layer 3 network with N6 Network Address Translation (N A T)." lightbox="media/private-mobile-network-design-requirements/layer-3-network-with-n6-nat.png":::

- Layer 3 network without Network Address Translation (NAT)
- This network topology is a similar solution to Layer 3 network with N6 Network Address Translation (NAT). UE IP address ranges must be configured as static routing in the DN router with the N6 NAT IP address as the next hop address.
+    This network topology is a similar solution, but UE IP address ranges must be configured as static routes in the data network router with the N6 IP address as the next hop address.
  :::image type="content" source="media/private-mobile-network-design-requirements/layer-3-network-without-n6-nat.png" alt-text="Diagram of a layer 3 network without Network Address Translation (N A T)." lightbox="media/private-mobile-network-design-requirements/layer-3-network-without-n6-nat.png":::

- Flat layer 2 network
- The packet core does not require L3 routers or any router-like functionality. An alternative topology could forgo the use of L3 gateway routers entirely and instead construct a layer 2 network in which the ASE is in the same subnet as your data and access networks. This network topology can be a cheaper alternative when you donΓÇÖt require L3 routing. This solution can be used for networks where NAPT is enabled.
+    The packet core does not require layer 3 routers or any router-like functionality. An alternative topology could forgo the use of layer 3 gateway routers entirely and instead construct a layer 2 network in which the ASE is in the same subnet as the data and access networks. This network topology can be a cheaper alternative when you don't require layer 3 routing. This requires Network Address Port Translation (NAPT) to be enabled on the packet core.
  :::image type="content" source="media/private-mobile-network-design-requirements/layer-2-network.png" alt-text="Diagram of a layer 2 network." lightbox="media/private-mobile-network-design-requirements/layer-2-network.png":::
-- Layer 3 network with multiple DNs
- - Packet Core can support multiple Attached Data Networks, each with its own configuration for DNS, UE IP address pools, N6 IP configuration, and NAT. The operator can provision UEs as subscribed in one or more DN and apply DN-specific policy and QoS.
- - This topology requires that the N6 is split into subnets per-DN or that all DNs exist in the same subnet. Due to this, this topology requires careful planning and configuration to prevent overlapping DN IP ranges or UE IP ranges that result in routing problems.
- :::image type="content" source="media/private-mobile-network-design-requirements/layer-3-network-with-multiple-dns.png" alt-text="Diagram of L3 network topology with multiple D N s." lightbox="media/private-mobile-network-design-requirements/layer-3-network-with-multiple-dns.png":::
-
+- Layer 3 network with multiple data networks
+ - AP5GC can support multiple attached data networks, each with its own configuration for Domain Name System (DNS), UE IP address pools, N6 IP configuration, and NAT. The operator can provision UEs as subscribed in one or more data networks and apply data network-specific policy and quality of service (QoS) configuration.
+ - This topology requires that the N6 interface is split into one subnet for each data network or one subnet for all data networks. This option therefore requires careful planning and configuration to prevent overlapping data network IP ranges or UE IP ranges.
+ :::image type="content" source="media/private-mobile-network-design-requirements/layer-3-network-with-multiple-dns.png" alt-text="Diagram of layer 3 network topology with multiple data networks." lightbox="media/private-mobile-network-design-requirements/layer-3-network-with-multiple-dns.png":::
+
+- Layer 3 network with VLAN separation
+ - You can also separate ASE traffic into VLANs, whether or not you choose to add layer 3 gateways to your network. There are multiple benefits to segmenting traffic into separate VLANs, including more flexible network management and increased security.
+ - For example, you could configure separate VLANs for management, access and data traffic, or a separate VLAN for each attached data network.
+ - VLANs must be configured on the local layer 2 or layer 3 network equipment. Multiple VLANs will be carried on a single link from ASE port 5 (access network) and/or 6 (core network), so you must configure each of those links as a VLAN trunk.
+ :::image type="content" source="media/private-mobile-network-design-requirements/layer-3-network-with-vlans.png" alt-text="Diagram of layer 3 network topology with V L A N s." lightbox="media/private-mobile-network-design-requirements/layer-3-network-with-vlans.png":::
+
### Subnets and IP addresses

You may have existing IP networks at the enterprise site that the private cellular network will have to integrate with. This might mean, for example:
-- Selecting IP subnets and IP addresses for the Azure Private 5G Core that match existing subnets without clashing addresses.
-- Segregating the new network via IP routers or using the private RFC1918 address space for subnets.
-- Assigning a special pool of IP addresses specifically for use by UEs when they attach to the network.
-- Using network address and port translation (NAPT), either on the packet core itself, or on an upstream network device such as a border router.
+- Selecting IP subnets and IP addresses for AP5GC that match existing subnets without clashing addresses.
+- Segregating the new network via IP routers or using the private RFC 1918 address space for subnets.
+- Assigning a pool of IP addresses specifically for use by UEs when they attach to the network.
+- Using Network Address Port Translation (NAPT), either on the packet core itself or on an upstream network device such as a border router.
- Optimizing the network for performance by choosing a maximum transmission unit (MTU) that minimizes fragmentation.
-You need to document the IPv4 subnets that will be used for the deployment and agree on the IP addresses to use for each element in the solution, and as on the IP addresses that will be allocated to UEs when they attach. You need to deploy (or configure existing) routers and firewalls at the enterprise site to permit traffic. You should also agree how and where in the network any NAPT or MTU changes are required and plan the associated router/firewall configuration. For more information, see [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md).
+You need to document the IPv4 subnets that will be used for the deployment and agree on the IP addresses to use for each element in the solution, and on the IP addresses that will be allocated to UEs when they attach. You need to deploy (or configure existing) routers and firewalls at the enterprise site to permit traffic. You should also agree how and where in the network any NAPT or MTU changes are required and plan the associated router/firewall configuration. For more information, see [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md).
### Network access
-Your design must reflect the enterpriseΓÇÖs rules on what networks and assets should be reachable by the RAN and UEs on the private 5G network. For example, they might be permitted to access local Domain Name System (DNS), Dynamic Host Configuration Protocol (DHCP), the internet, or Azure, but not a factory operations local area network (LAN). You may need to arrange for remote access to the network so that you can troubleshoot issues without requiring a site visit. You also need to consider how the enterprise site will be connected to upstream networks such as Azure, for management and/or for access to other resources and applications outside of the enterprise site.
+Your design must reflect the enterprise's rules on what networks and assets should be reachable by the RAN and UEs on the private 5G network. For example, they might be permitted to access local Domain Name System (DNS), Dynamic Host Configuration Protocol (DHCP), the internet, or Azure, but not a factory operations local area network (LAN). You may need to arrange for remote access to the network so that you can troubleshoot issues without requiring a site visit. You also need to consider how the enterprise site will be connected to upstream networks such as Azure for management and/or for access to other resources and applications outside of the enterprise site.
You need to agree with the enterprise team which IP subnets and addresses will be allowed to communicate with each other. Then, create a routing plan and/or access control list (ACL) configuration that implements this agreement on the local IP infrastructure. You may also use virtual local area networks (VLANs) to partition elements at layer 2, configuring your switch fabric to assign connected ports to specific VLANs (for example, to put the Azure Stack Edge port used for RAN access into the same VLAN as the RAN units connected to the Ethernet switch). You should also agree with the enterprise to set up an access mechanism, such as a virtual private network (VPN), that allows your support personnel to remotely connect to the management interface of each element in the solution. You also need an IP link between Azure Private 5G Core and Azure for management and telemetry.
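
As an illustration only (a sketch assuming a Linux-based router and placeholder subnets, not a recommendation for specific addressing), an ACL that lets UEs reach an enterprise DNS server but not a factory operations LAN might look like:

```bash
# Allow UEs (10.45.0.0/16) to reach the enterprise DNS server, but block the factory operations LAN.
# All addresses below are placeholders for this example.
sudo iptables -A FORWARD -s 10.45.0.0/16 -d 192.168.10.53/32 -p udp --dport 53 -j ACCEPT
sudo iptables -A FORWARD -s 10.45.0.0/16 -d 192.168.20.0/24 -j DROP
```

Dedicated routers and firewalls express the same rules in their own configuration syntax; the point is to capture the agreed access rules in a form you can reapply consistently.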
The RAN that you use to broadcast the signal across the enterprise site must com
- You have received permission for the RAN to broadcast using spectrum in a certain location, for example, by grant from a telecom operator, regulatory authority or via a technological solution such as a Spectrum Access System (SAS).
- The RAN units in a site have access to high-precision timing sources, such as Precision Time Protocol (PTP) and GPS location services.
-You should ask your RAN partner for the countries/regions and frequency bands for which the RAN is approved. You may find that you need to use multiple RAN partners to cover the countries/regions in which you provide your solution. Although the RAN, UE and packet core all communicate using standard protocols, Microsoft recommends that you perform interoperability testing for the specific 4G Long-Term Evolution (LTE) or 5G standalone (SA) protocol between Azure Private 5G Core, UEs and the RAN prior to any deployment at an enterprise customer.
+You should ask your RAN partner for the countries/regions and frequency bands for which the RAN is approved. You may find that you need to use multiple RAN partners to cover the countries and regions in which you provide your solution. Although the RAN, UE and packet core all communicate using standard protocols, we recommend that you perform interoperability testing for the specific 4G Long-Term Evolution (LTE) or 5G standalone (SA) protocol between Azure Private 5G Core, UEs and the RAN prior to any deployment at an enterprise customer.
Your RAN will transmit a Public Land Mobile Network Identity (PLMN ID) to all UEs on the frequency band it is configured to use. You should define the PLMN ID and confirm your access to spectrum. In some countries, spectrum must be obtained from the national regulator or incumbent telecommunications operator. For example, if you're using the band 48 Citizens Broadband Radio Service (CBRS) spectrum, you may need to work with your RAN partner to deploy a Spectrum Access System (SAS) domain proxy on the enterprise site so that the RAN can continuously check that it is authorized to broadcast.
The Maximum Transmission Unit (MTU) is a property of an IP link, and it is confi
To avoid transmission issues caused by IPv4 fragmentation, a 4G or 5G packet core instructs UEs what MTU they should use. However, UEs do not always respect the MTU signaled by the packet core.
-IP packets from UEs are tunneled through from the RAN, which adds overhead from encapsulation. Due to this, the MTU value for the UE should be smaller than the MTU value used between the RAN and the Packet Core to avoid transmission issues.
+IP packets from UEs are tunneled through from the RAN, which adds overhead from encapsulation. The MTU value for the UE should therefore be smaller than the MTU value used between the RAN and the packet core to avoid transmission issues.
-RANs typically come pre-configured with an MTU of 1500. The Packet CoreΓÇÖs default UE MTU is 1300 bytes to allow for encapsulation overhead. These values maximize RAN interoperability, but risk that certain UEs will not observe the default MTU and will generate larger packets that require IPv4 fragmentation that may be dropped by the network.
+RANs typically come pre-configured with an MTU of 1500. The packet core's default UE MTU is 1300 bytes to allow for encapsulation overhead. These values maximize RAN interoperability, but risk that certain UEs will not observe the default MTU and will generate larger packets that require IPv4 fragmentation and that may be dropped by the network.
-If you are affected by this issue, it is strongly recommended to configure the RAN to use an MTU of 1560 or higher which allows a sufficient overhead for the encapsulation and avoids fragmentation with a UE using a standard MTU of 1500.
+If you are affected by this issue, it is strongly recommended to configure the RAN to use an MTU of 1560 or higher, which allows a sufficient overhead for the encapsulation and avoids fragmentation with a UE using a standard MTU of 1500.
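
One way to verify the effective MTU on a path (a sketch assuming a Linux test host behind a UE or on the data network; the target address `192.0.2.1` is a documentation placeholder) is to send a maximum-size packet with the don't-fragment bit set:

```bash
# 1472 bytes of ICMP payload + 28 bytes of headers = a 1500-byte IP packet.
# If this fails but a smaller size succeeds, something on the path has a lower MTU.
ping -M do -s 1472 -c 3 192.0.2.1
```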
### Signal coverage
You should perform a site survey with your RAN partner and the enterprise to mak
### SIMs
-Every UE must present an identity to the network, encoded in a subscriber identity module (SIM). SIMs are available in different physical form factors and in software-only format (eSIM). The data encoded on the SIM must match the configuration of the RAN and of the provisioned identity data in the Azure Private 5G Core.
+Every UE must present an identity to the network, encoded in a subscriber identity module (SIM). SIMs are available in different physical form factors and in software-only format (eSIM). The data encoded on the SIM must match the configuration of the RAN and of the provisioned identity data in Azure Private 5G Core.
-Obtain SIMs in factors compatible with the UEs and programmed with the PLMN ID and keys that you want to use for the deployment. Physical SIMs are widely available on the open market at relatively low cost. If you prefer to use eSIMs, you need to deploy the necessary eSIM configuration and provisioning infrastructure so that UEs can configure themselves before they attach to the cellular network. You can use the provisioning data you receive from your SIM partner to provision matching entries in Azure Private 5G Core. Because SIM data must be kept secure, the cryptographic keys used to provision SIMs are not readable in Azure Private 5G Core once set, so you must consider how you store them in case you ever need to reprovision the data in Azure Private 5G Core.
+Obtain SIMs in factors compatible with the UEs and programmed with the PLMN ID and keys that you want to use for the deployment. Physical SIMs are widely available on the open market at relatively low cost. If you prefer to use eSIMs, you need to deploy the necessary eSIM configuration and provisioning infrastructure so that UEs can configure themselves before they attach to the cellular network. You can use the provisioning data you receive from your SIM partner to provision matching entries in Azure Private 5G Core. Because SIM data must be kept secure, the cryptographic keys used to provision SIMs are not readable once set, so you must consider how you store them in case you ever need to reprovision the data in Azure Private 5G Core.
### Automation and integration
-Being able to build enterprise networks using automation and other programmatic techniques saves time, reduces errors, and produces better customer outcomes. Such techniques also provide a recovery path in the event of a site failure that requires rebuilding the network.
+Building enterprise networks using automation and other programmatic techniques saves time, reduces errors, and produces better outcomes. These techniques also provide a recovery path in the event of a site failure that requires rebuilding the network.
-You should adopt a programmatic, *infrastructure as code* approach to your deployments. You can use templates or the Azure REST API to build your deployment using parameters as inputs with values that you have collected during the design phase of the project. You should save provisioning information such as SIM data, switch/router configuration, and network policies in machine-readable format so that, in the event of a failure, you can reapply the configuration in the same way as you originally did. Another best practice to recover from failure is to deploy a spare Azure Stack Edge server to minimize recovery time if the first unit fails; you can then use your saved templates and inputs to quickly recreate the deployment. For more information on deploying a network using templates, refer to [Quickstart: Deploy a private mobile network and site - ARM template](deploy-private-mobile-network-with-site-arm-template.md).
+We recommend adopting a programmatic, *infrastructure as code* approach to your deployments. You can use templates or the Azure REST API to build your deployment using parameters as inputs with values that you have collected during the design phase of the project. You should save provisioning information such as SIM data, switch/router configuration, and network policies in machine-readable format so that, in the event of a failure, you can reapply the configuration in the same way as you originally did. Another best practice to recover from failure is to deploy a spare Azure Stack Edge server to minimize recovery time if the first unit fails; you can then use your saved templates and inputs to quickly recreate the deployment. For more information on deploying a network using templates, refer to [Quickstart: Deploy a private mobile network and site - ARM template](deploy-private-mobile-network-with-site-arm-template.md).
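
For example, a minimal sketch of deploying such a template with the Azure CLI (the resource group, template, and parameter file names below are placeholders) looks like this:

```azurecli
# Deploy a private mobile network and site from an ARM template and a saved parameter file.
az deployment group create \
  --resource-group myPrivateNetworkRG \
  --template-file private-mobile-network.json \
  --parameters @private-mobile-network.parameters.json
```

Keeping the parameter file under source control means the same deployment can be reproduced later if a site needs to be rebuilt.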
You must also consider how you integrate other Azure products and services with the private enterprise network. These products include [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) and [role-based access control (RBAC)](../role-based-access-control/overview.md), where you must consider how tenants, subscriptions and resource permissions will align with the business model that exists between you and the enterprise, as well as your own approach to customer system management. For example, you might use [Azure Blueprints](../governance/blueprints/overview.md) to set up the subscriptions and resource group model that works best for your organization.
private-5g-core Provision Sims Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/provision-sims-arm-template.md
Each IP address must come from the pool you assigned for static IP address alloc
| The network slice that the SIM will use. | `staticIpConfiguration.sliceId` |
| The static IP address to assign to the SIM. | `staticIpConfiguration.staticIpAddress` |
-## Prepare an array for your SIMs
+## Prepare one or more arrays for your SIMs
-Use the information you collected in [Collect the required information for your SIMs](#collect-the-required-information-for-your-sims) to create a JSON array containing properties for each of the SIMs you want to provision. The following is an example of an array containing properties for two SIMs (`SIM1` and `SIM2`).
+Use the information you collected in [Collect the required information for your SIMs](#collect-the-required-information-for-your-sims) to create one or more JSON arrays containing properties for up to 500 of the SIMs you want to provision. The following is an example of an array containing properties for two SIMs (`SIM1` and `SIM2`).
+
+> [!IMPORTANT]
+> Bulk SIM provisioning is limited to 500 SIMs. If you want to provision more than 500 SIMs, you must create multiple SIM arrays with no more than 500 SIMs in any one array and repeat the provisioning process for each SIM array.
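
If your SIM data starts out as one large JSON array, one way to split it into batches (a sketch assuming the `jq` tool and a source file named `all-sims.json`, both placeholders) is:

```bash
# Write the first two batches of at most 500 SIMs each to separate files.
jq '.[0:500]'    all-sims.json > sims-batch-1.json
jq '.[500:1000]' all-sims.json > sims-batch-2.json
```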
If you don't want to configure static IP addresses for a SIM, delete the `staticIpConfiguration` parameter for that SIM. If your private mobile network has multiple data networks and you want to assign a different static IP address for each data network to the same SIM, you can include additional `attachedDataNetworkId`, `sliceId` and `staticIpAddress` parameters for each IP address under `staticIpConfiguration`.
The following Azure resources are defined in the template.
- **Existing Mobile Network Name:** enter the name of the Mobile Network resource representing your private mobile network.
- **Existing Sim Policy Name:** enter the name of the SIM policy you want to assign to the SIMs.
- **Sim Group Name:** enter the name for the new SIM group.
- - **Sim Resources:** paste in the JSON array you prepared in [Prepare an array for your SIMs](#prepare-an-array-for-your-sims).
+ - **Sim Resources:** paste in one of the JSON arrays you prepared in [Prepare one or more arrays for your SIMs](#prepare-one-or-more-arrays-for-your-sims).
:::image type="content" source="media/provision-sims-arm-template/sims-arm-template-configuration-fields.png" alt-text="Screenshot of the Azure portal showing the configuration fields for the SIMs ARM template.":::
The following Azure resources are defined in the template.
If the validation fails, you'll see an error message and the **Configuration** tab(s) containing the invalid configuration will be flagged. Select the flagged tab(s) and use the error messages to correct invalid configuration before returning to the **Review + create** tab.

4. Once your configuration has been validated, you can select **Create** to provision your SIMs. The Azure portal will display a confirmation screen when the SIMs have been provisioned.
+5. If you are provisioning more than 500 SIMs, repeat this process for each of your JSON arrays.
## Review deployed resources
private-5g-core Provision Sims Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/provision-sims-azure-portal.md
- Manually entering each provisioning value into fields in the Azure portal. This option is best if you're provisioning a few SIMs.
- - Importing a JSON file containing values for one or more SIM resources. This option is best if you're provisioning a large number of SIMs. You'll need a good JSON editor if you want to use this option.
+ - Importing one or more JSON files containing values for up to 500 SIM resources each. This option is best if you're provisioning a large number of SIMs. You'll need a good JSON editor if you want to use this option.
+
+ - Importing an encrypted JSON file containing values for one or more vendor provided SIM resources. This option is required for any vendor provided SIMs. You'll need a good JSON editor if you want to edit any fields within the encrypted JSON file when using this option.
- Decide on the SIM group to which you want to add your SIMs. You can create a new SIM group when provisioning your SIMs, or you can choose an existing SIM group. See [Manage SIM groups - Azure portal](manage-sim-groups.md) for information on viewing your existing SIM groups.
  - If you're manually entering provisioning values, you'll add each SIM to a SIM group individually.
- - If you're using a JSON file, all SIMs in the same JSON file will be added to the same SIM group.
+ - If you're using one or more JSON or encrypted JSON files, all SIMs in the same JSON file will be added to the same SIM group.
- For each SIM you want to provision, decide whether you want to assign a SIM policy to it. If you do, you must have already created the relevant SIM policies using the instructions in [Configure a SIM policy - Azure portal](configure-sim-policy-azure-portal.md). SIMs can't access your private mobile network unless they have an assigned SIM policy.
  - If you're manually entering provisioning values, you'll need the name of the SIM policy.
- - If you're using a JSON file, you'll need the full resource ID of the SIM policy. You can collect this by navigating to the SIM Policy resource, selecting **JSON View** and copying the contents of the **Resource ID** field.
+ - If you're using one or more JSON or encrypted JSON files, you'll need the full resource ID of the SIM policy. You can collect this by navigating to the SIM Policy resource, selecting **JSON View** and copying the contents of the **Resource ID** field.
## Collect the required information for your SIMs
To begin, collect the values in the following table for each SIM you want to pro
| Value | Field name in Azure portal | JSON file parameter name |
|--|--|--|
| SIM name. The SIM name must only contain alphanumeric characters, dashes, and underscores. | **SIM name** | `simName` |
-| The Integrated Circuit Card Identification Number (ICCID). The ICCID identifies a specific physical SIM or eSIM, and includes information on the SIM's country/region and issuer. The ICCID is optional and is a unique numerical value between 19 and 20 digits in length, beginning with 89. | **ICCID** | `integratedCircuitCardIdentifier` |
+| The Integrated Circuit Card Identification Number (ICCID). The ICCID identifies a specific physical SIM or eSIM, and includes information on the SIM's country and issuer. The ICCID is a unique numerical value between 19 and 20 digits in length, beginning with 89. | **ICCID** | `integratedCircuitCardIdentifier` |
| The international mobile subscriber identity (IMSI). The IMSI is a unique number (usually 15 digits) identifying a device or user in a mobile network. | **IMSI** | `internationalMobileSubscriberIdentity` |
| The Authentication Key (Ki). The Ki is a unique 128-bit value assigned to the SIM by an operator, and is used with the derived operator code (OPc) to authenticate a user. It must be a 32-character string, containing hexadecimal characters only. | **Ki** | `authenticationKey` |
| The derived operator code (OPc). The OPc is taken from the SIM's Ki and the network's operator code (OP). The packet core instance uses it to authenticate a user using a standards-based algorithm. The OPc must be a 32-character string, containing hexadecimal characters only. | **Opc** | `operatorKeyCode` |
| The type of device using this SIM. This value is an optional free-form string. You can use it as required to easily identify device types using the enterprise's private mobile network. | **Device type** | `deviceType` |
-| The SIM policy to assign to the SIM. This is optional, but your SIMs won't be able to use the private mobile network without an assigned SIM policy. You'll need to assign a SIM policy if you want to set static IP addresses to the SIM during provisioning. | **SIM policy** | `simPolicyId` |
+| The SIM policy to assign to the SIM. This is optional, but your SIMs won't be able to use the private mobile network without an assigned SIM policy. | **SIM policy** | `simPolicyId` |
### Collect the required information for assigning static IP addresses

You only need to complete this step if all of the following apply:
-- You're using a JSON file to provision your SIMs.
+- You're using one or more JSON files to provision your SIMs.
- You've configured static IP address allocation for your packet core instance(s).
- You want to assign static IP addresses to the SIMs during SIM provisioning.
Each IP address must come from the pool you assigned for static IP address alloc
| The network slice that the SIM will use. | Not applicable. | `staticIpConfiguration.sliceId` |
| The static IP address to assign to the SIM. | Not applicable. | `staticIpConfiguration.staticIpAddress` |
-## Create the JSON file
-
-Only carry out this step if you decided in [Prerequisites](#prerequisites) to use a JSON file to provision your SIMs. Otherwise, you can skip to [Begin provisioning the SIMs in the Azure portal](#begin-provisioning-the-sims-in-the-azure-portal).
-
-Prepare the JSON file using the information you collected for your SIMs in [Collect the required information for your SIMs](#collect-the-required-information-for-your-sims). The example file below shows the required format. It contains the parameters required to provision two SIMs (`SIM1` and `SIM2`).
-
-If you don't want to configure static IP addresses for a SIM, delete the `staticIpConfiguration` parameter for that SIM. If your private mobile network has multiple data networks and you want to assign a different static IP address for each data network to the same SIM, you can include additional `attachedDataNetworkId`, `sliceId` and `staticIpAddress` parameters for each IP address under `staticIpConfiguration`.
-
-```json
-[
- {
- "simName": "SIM1",
- "integratedCircuitCardIdentifier": "8912345678901234566",
- "internationalMobileSubscriberIdentity": "001019990010001",
- "authenticationKey": "00112233445566778899AABBCCDDEEFF",
- "operatorKeyCode": "63bfa50ee6523365ff14c1f45f88737d",
- "deviceType": "Cellphone",
- "simPolicyId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/simPolicies/SimPolicy1",
- "staticIpConfiguration" :[
- {
- "attachedDataNetworkId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/site-1/packetCoreDataPlanes/site-1/attachedDataNetworks/adn1",
- "sliceId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/slices/slice-1",
- "staticIpAddress": "10.132.124.54"
- },
- {
- "attachedDataNetworkId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/site-1/packetCoreDataPlanes/site-1/attachedDataNetworks/adn2",
- "sliceId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/slices/slice-1",
- "staticIpAddress": "10.132.124.55"
- }
- ]
- },
- {
- "simName": "SIM2",
- "integratedCircuitCardIdentifier": "8922345678901234567",
- "internationalMobileSubscriberIdentity": "001019990010002",
- "authenticationKey": "11112233445566778899AABBCCDDEEFF",
- "operatorKeyCode": "63bfa50ee6523365ff14c1f45f88738d",
- "deviceType": "Sensor",
- "simPolicyId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/simPolicies/SimPolicy2",
- "staticIpConfiguration" :[
- {
- "attachedDataNetworkId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/site-1/packetCoreDataPlanes/site-1/attachedDataNetworks/adn1",
- "sliceId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/slices/slice-1",
- "staticIpAddress": "10.132.124.54"
- },
- {
- "attachedDataNetworkId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/site-1/packetCoreDataPlanes/site-1/attachedDataNetworks/adn2",
- "sliceId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/slices/slice-1",
- "staticIpAddress": "10.132.124.55"
- }
- ]
- }
-]
-```
+## Create or edit JSON files
+
+Only carry out this step if you decided in [Prerequisites](#prerequisites) to use JSON files or an encrypted JSON file provided by a SIM vendor to provision your SIMs. Otherwise, you can skip to [Begin provisioning the SIMs in the Azure portal](#begin-provisioning-the-sims-in-the-azure-portal).
+
+Prepare the files using the information you collected for your SIMs in [Collect the required information for your SIMs](#collect-the-required-information-for-your-sims). The example files below show the required format; each contains the parameters required to provision two SIMs.
+
+> [!IMPORTANT]
+> Bulk SIM provisioning is limited to 500 SIMs. If you want to provision more than 500 SIMs, you must create multiple JSON files with no more than 500 SIMs in any one file and repeat the provisioning process for each JSON file. A sketch for splitting a larger SIM list follows the examples below.
+
+- If you are creating a JSON file, use the following example. It contains the parameters required to provision two SIMs (`SIM1` and `SIM2`). If you don't want to assign a SIM policy to a SIM, you can delete the `simPolicyId` parameter for that SIM.
+
+ ```json
+ [
+ {
+ "simName": "SIM1",
+ "integratedCircuitCardIdentifier": "8912345678901234566",
+ "internationalMobileSubscriberIdentity": "001019990010001",
+ "authenticationKey": "00112233445566778899AABBCCDDEEFF",
+ "operatorKeyCode": "63bfa50ee6523365ff14c1f45f88737d",
+ "deviceType": "Cellphone",
+ "simPolicyId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/simPolicies/SimPolicy1"
+ },
+ {
+ "simName": "SIM2",
+ "integratedCircuitCardIdentifier": "8922345678901234567",
+ "internationalMobileSubscriberIdentity": "001019990010002",
+ "authenticationKey": "11112233445566778899AABBCCDDEEFF",
+ "operatorKeyCode": "63bfa50ee6523365ff14c1f45f88738d",
+ "deviceType": "Sensor",
+ "simPolicyId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/simPolicies/SimPolicy2"
+ }
+ ]
+ ```
+
+- If you are editing an encrypted JSON file provided by a SIM vendor, use the following example. It contains the parameters required to provision two SIMs (`SIM1` and `SIM2`).
+ - If you don't want to assign a SIM policy to a SIM, you can delete the `simPolicyId` parameter for that SIM.
+
+ ```json
+ {
+   "version": 1,
+   "azureKey": 1,
+   "vendorKeyFingerprint": "A5719BCEFD6A2021A11D7649942ECC14",
+   "encryptedTransportKey": "0EBAE5E2D31A1BE48495F6DCA65983EEAE6BA6B75A92040562EAD84079BF701CBD3BB1602DB74E85921184820B78A02EC709951195DC87E44481FDB6B826DF775E29B7073644EA66649A14B6CA6B0EE75DE8B4A8D0D5186319E37FBF165A691E607CFF8B65F3E5E9D448049704DE4EA047101ADA4554A543F405B447B8DB687C0B7624E62515445F3E887B3328AA555540D9959752C985490586EF06681501A89594E28F98BF66F179FE3F1D2EE13C69BC42C30A8D3DC6898B8160FC66CDDEE164760F27B68A07BA4C4AE5AFFEA45EE8342E1CA8470150ED6AF4215CEF173418E60E2B1DF4A8C2AE6F0C9A291F5D185ECAD0D94D48EFD06570A0C1AE27D5EC20",
+   "signedTransportKey": "83515CC47C8890F62D4A0D16DE58C2F2DCFD827C317047693A46B9CA1F9EBC33CCDB8CABE04A275D65E180813CCFF43FC2DA95E19E2B9FF2588AE0914418DC9CB5506EB7AEADB272F5DAB9F0B1CCFAD62B95C91D4F4680A350F56D2A7F8EC863F4D61E1D7A38746AEE6C6391D619CCFCFA2B6D554671D91A26484CD6E120D84917FBF69D3B56B2AA8F2B36AF88492F1A7E267594B6C1596B81A81079540EC3F31869294BFEB225DFB171DE557B8C05D7C963E047E3AF36D1387FEDA28E55E411E5FB6AED178FB9C92D674D71AF8FEB6462F509E6423D4EBE0EC84E4135AA6C7A36F849A14A6A70E7188E08278D515BD95A549645E9D595D1DEC13E1A68B9CB67",
+   "sims": [
+     {
+       "name": "SIM 1",
+       "properties": {
+         "deviceType": "Sensor",
+         "integratedCircuitCardIdentifier": "8922345678901234567",
+         "internationalMobileSubscriberIdentity": "001019990010002",
+         "encryptedCredentials": "3ED205BE2DD7F0E467283EC55F9E8F3588B68DC98811BE671070C65EFDE0CCCAD18C8B663231C80FB478F753A6B09142D06982421261679B7BB112D36473EA7EF973DCF7F634124B58DD945FE61D4B16978438CB33E64D3AA58B5C38A0D97030B5F95B16E308D919EB932ACCD36CB8C2838C497B3B38A60E3DD385",
+         "simPolicy": {
+           "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/mobileNetworks/testMobileNetwork/simPolicies/MySimPolicy"
+         },
+         "staticIpConfiguration": [
+           {
+             "attachedDataNetwork": {
+               "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/TestPacketCoreCP/packetCoreDataPlanes/TestPacketCoreDP/attachedDataNetworks/TestAttachedDataNetwork"
+             },
+             "slice": {
+               "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/mobileNetworks/testMobileNetwork/slices/testSlice"
+             },
+             "staticIp": {
+               "ipv4Address": "2.4.0.1"
+             }
+           }
+         ]
+       }
+     },
+     {
+       "name": "SIM 2",
+       "properties": {
+         "deviceType": "Cellphone",
+         "integratedCircuitCardIdentifier": "1234545678907456123",
+         "internationalMobileSubscriberIdentity": "001019990010003",
+         "encryptedCredentials": "3ED205BE2DD7F0E467283EC55F9E8F3588B68DC98811BE671070C65EFDE0CCCAD18C8B663231C80FB478F753A6B09142D06982421261679B7BB112D36473EA7EF973DCF7F634124B58DD945FE61D4B16978438CB33E64D3AA58B5C38A0D97030B5F95B16E308D919EB932ACCD36CB8C2838C497B3B38A60E3DD385",
+         "simPolicy": {
+           "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/mobileNetworks/testMobileNetwork/simPolicies/MySimPolicy"
+         },
+         "staticIpConfiguration": [
+           {
+             "attachedDataNetwork": {
+               "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/TestPacketCoreCP/packetCoreDataPlanes/TestPacketCoreDP/attachedDataNetworks/TestAttachedDataNetwork"
+             },
+             "slice": {
+               "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/mobileNetworks/testMobileNetwork/slices/testSlice"
+             },
+             "staticIp": {
+               "ipv4Address": "2.4.0.2"
+             }
+           }
+         ]
+       }
+     }
+   ]
+ }
+ ```
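If your SIM list is larger than the 500-SIM limit described above, you can split it into multiple files before uploading. The following is a minimal PowerShell sketch, assuming an input file named `all-sims.json` in the plain (unencrypted) format shown in the first example; the output file names are placeholders.

```powershell
# Split a large SIM array into JSON files of at most 500 SIMs each.
$allSims   = Get-Content -Path .\all-sims.json -Raw | ConvertFrom-Json
$batchSize = 500
$batch     = 0

for ($i = 0; $i -lt $allSims.Count; $i += $batchSize) {
    $end = [Math]::Min($i + $batchSize - 1, $allSims.Count - 1)
    $batch++

    # -Depth keeps nested objects such as staticIpConfiguration intact.
    $json = ConvertTo-Json -InputObject @($allSims[$i..$end]) -Depth 10
    Set-Content -Path (".\sims-batch-{0}.json" -f $batch) -Value $json
}
```

Each resulting `sims-batch-*.json` file can then be uploaded separately as described in the next section.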
## Begin provisioning the SIMs in the Azure portal
In this step, you'll enter provisioning values for your SIMs directly into the A
In this step, you'll provision SIMs using a JSON file.
-1. In **Add SIMs** on the right, select **Browse** and then select the JSON file you created in [Create the JSON file](#create-the-json-file).
-1. Set the **SIM group** field to an existing SIM group, or select **Create new** to create a new one.
+1. In **Add SIMs** on the right, select **Browse** and then select one of the JSON files you created or edited in [Create or edit JSON files](#create-or-edit-json-files).
+1. Set the **SIM group** field to an existing SIM group, or select **Create new** to create a new one.
1. Select **Add**. If the **Add** button is greyed out, check your JSON file to confirm that it's correctly formatted. 1. The Azure portal will now begin deploying the SIMs. When the deployment is complete, select **Go to resource group**. :::image type="content" source="media/provision-sims-azure-portal/multiple-sim-resource-deployment.png" alt-text="Screenshot of the Azure portal. It shows a completed deployment of SIM resources through a J S O N file and the Go to resource group button."::: 1. Select the **SIM Group** resource to which you added your SIMs.
-1. Check the list of SIMs to ensure your new SIMs are present and provisioned correctly.
+1. Check the list of SIMs to ensure your new SIMs are present and provisioned correctly.
:::image type="content" source="media/provision-sims-azure-portal/sims-list.png" alt-text="Screenshot of the Azure portal. It shows a list of currently provisioned SIMs for a private mobile network." lightbox="media/provision-sims-azure-portal/sims-list.png":::
+1. If you are provisioning more than 500 SIMs, repeat this process for each JSON file.
+ ## Next steps If you've configured static IP address allocation for your packet core instance(s) and you haven't already assigned static IP addresses to the SIMs you've provisioned, you can do so by following the steps in [Assign static IP addresses](manage-existing-sims.md#assign-static-ip-addresses).
private-link Create Private Link Service Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-template.md
Title: 'Quickstart: Create a private link service - ARM template' description: In this quickstart, you use an Azure Resource Manager template (ARM template) to create a private link service.- Previously updated : 05/29/2020 Last updated : 03/30/2023
In this quickstart, you use an Azure Resource Manager template (ARM template) to
You can also complete this quickstart by using the [Azure portal](create-private-link-service-portal.md), [Azure PowerShell](create-private-link-service-powershell.md), or the [Azure CLI](create-private-link-service-cli.md).
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Fprivatelink-service%2Fazuredeploy.json)
The template used in this quickstart is from [Azure Quickstart Templates](https:
Multiple Azure resources are defined in the template: - [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks): There's one virtual network for each virtual machine.+ - [**Microsoft.Network/loadBalancers**](/azure/templates/microsoft.network/loadBalancers): The load balancer that exposes the virtual machines that host the service.+ - [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces): There are two network interfaces, one for each virtual machine.+ - [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines): There are two virtual machines, one that hosts the service and one that tests the connection to the private endpoint.+ - [**Microsoft.Compute/virtualMachines/extensions**](/azure/templates/Microsoft.Compute/virtualMachines/extensions): The extension that installs a web server.+ - [**Microsoft.Network/privateLinkServices**](/azure/templates/microsoft.network/privateLinkServices): The private link service to expose the service.+ - [**Microsoft.Network/publicIpAddresses**](/azure/templates/microsoft.network/publicIpAddresses): There are two public IP addresses, one for each virtual machine.+ - [**Microsoft.Network/privateendpoints**](/azure/templates/microsoft.network/privateendpoints): The private endpoint to access the service. ## Deploy the template
Here's how to deploy the ARM template to Azure:
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Fprivatelink-service%2Fazuredeploy.json) 2. Select or create your resource group.
-3. Type the virtual machine administrator username and password.
-4. Read the terms and conditions statement. If you agree, select **I agree to the terms and conditions stated above** > **Purchase**.
+
+3. Enter the virtual machine administrator username and password.
+
+4. Select **Review + create**.
+
+5. Select **Create**.
+
+ The deployment takes a few minutes to complete.
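If you'd rather deploy the same quickstart template from a command line, a minimal Azure PowerShell sketch is shown below; the resource group name and location are placeholders, and you're prompted for any required template parameters such as the administrator credentials.

```azurepowershell-interactive
# Deploy the private link service quickstart template (placeholder resource group and location).
$templateUri = 'https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/privatelink-service/azuredeploy.json'

New-AzResourceGroup -Name 'CreatePrivLinkService-rg' -Location 'eastus'

New-AzResourceGroupDeployment `
    -ResourceGroupName 'CreatePrivLinkService-rg' `
    -TemplateUri $templateUri
```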
## Validate the deployment
Connect to the VM _myConsumerVm{uniqueid}_ from the internet as follows:
3. Select **Download RDP File**. Azure creates a Remote Desktop Protocol (_.rdp_) file and downloads it to your computer.
-4. Open the downloaded .rdp file.
+4. Open the RDP file that was downloaded to your computer.
a. If prompted, select **Connect**.
Connect to the VM _myConsumerVm{uniqueid}_ from the internet as follows:
Here's how to connect to the http service from the VM by using the private endpoint. 1. Go to the Remote Desktop of _myConsumerVm{uniqueid}_.+ 2. Open a browser, and enter the private endpoint address: `http://10.0.0.5/`.+ 3. The default IIS page appears. ## Clean up resources
-When you no longer need the resources that you created with the private link service, delete the resource group. This removes the private link service and all the related resources.
+When you no longer need the resources that you created with the private link service, delete the resource group. This operation removes the private link service and all the related resources.
To delete the resource group, call the `Remove-AzResourceGroup` cmdlet:
Remove-AzResourceGroup -Name <your resource group name>
## Next steps - For more information on the services that support a private endpoint, see: > [!div class="nextstepaction"] > [Private Link availability](private-link-overview.md#availability)
private-link Troubleshoot Private Link Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/troubleshoot-private-link-connectivity.md
Title: Troubleshoot Azure Private Link Service connectivity problems description: Step-by-step guidance to diagnose private link connectivity---+ - Previously updated : 01/31/2020- Last updated : 03/29/2020+
This article provides step-by-step guidance to validate and diagnose connectivity for your Azure Private Link setup.
-With Azure Private Link, you can access Azure platform as a service (PaaS) services, such as Azure Storage, Azure Cosmos DB, and Azure SQL Database, and Azure hosted customer or partner services over a private endpoint in your virtual network. Traffic between your virtual network and the service traverses over the Microsoft backbone network, which eliminates exposure from the public internet. You can also create your own private link service in your virtual network and deliver it privately to your customers.
+With Azure Private Link, you can access Azure platform as a service (PaaS) services and Azure hosted customer or partner services over a private endpoint in your virtual network. Traffic between your virtual network and the service traverses over the Microsoft backbone network, which eliminates exposure from the public internet. You can also create your own private link service in your virtual network and deliver it privately to your customers.
You can enable your service that runs behind the Standard tier of Azure Load Balancer for Private Link access. Consumers of your service can create a private endpoint inside their virtual network and map it to this service to access it privately. Here are the connectivity scenarios that are available with Private Link: - Virtual network from the same region+ - Regionally peered virtual networks+ - Globally peered virtual networks+ - Customer on-premises over VPN or Azure ExpressRoute circuits ## Deployment troubleshooting
-Review the information on [Disabling network policies on the private link service](./disable-private-link-service-network-policy.md) for troubleshooting cases where you're unable to select the source IP address from the subnet of your choice for your private link service.
-
-Make sure that the setting **privateLinkServiceNetworkPolicies** is disabled for the subnet you're selecting the source IP address from.
+For more information on troubleshooting when you're unable to select the source IP address from the subnet of your choice for your private link service, see [Disabling network policies on the private link service](./disable-private-link-service-network-policy.md).
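As a quick check, the following minimal Azure PowerShell sketch (with placeholder virtual network, subnet, and resource group names) shows one way the setting can be disabled on the subnet that provides the source IP address:

```azurepowershell-interactive
# Disable private link service network policies on the source subnet (placeholder names).
$vnet = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'myResourceGroup'

($vnet.Subnets | Where-Object { $_.Name -eq 'mySubnet' }).PrivateLinkServiceNetworkPolicies = 'Disabled'

$vnet | Set-AzVirtualNetwork
```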
## Diagnose connectivity problems
If you experience connectivity problems with your private link setup, review the
1. Review Private Link configuration by browsing the resource. a. Go to [Private Link Center](https://portal.azure.com/#blade/Microsoft_Azure_Network/PrivateLinkCenterBlade/overview).-
- ![Private Link Center](./media/private-link-tsg/private-link-center.png)
+
+ :::image type="content" source="./media/private-link-tsg/private-link-center.png" alt-text="Screenshot of Private Link Center.":::
b. On the left pane, select **Private link services**.
- ![Private link services](./media/private-link-tsg/private-link-service.png)
+ :::image type="content" source="./media/private-link-tsg/private-link-service.png" alt-text="Screenshot of Private link services.":::
c. Filter and select the private link service that you want to diagnose. d. Review the private endpoint connections.+ - Make sure that the private endpoint that you're seeking connectivity from is listed with an **Approved** connection state.+ - If the state is **Pending**, select it and approve it.
- ![Private endpoint connections](./media/private-link-tsg/pls-private-endpoint-connections.png)
+ :::image type="content" source="./media/private-link-tsg/pls-private-endpoint-connections.png" alt-text="Screenshot of Private endpoint connections.":::
- Go to the private endpoint that you're connecting from by selecting the name. Make sure the connection status shows as **Approved**.
- ![Private endpoint connection overview](./media/private-link-tsg/pls-private-endpoint-overview.png)
+ :::image type="content" source="./media/private-link-tsg/pls-private-endpoint-overview.png" alt-text="Screenshot of private endpoint connection overview.":::
- After both sides are approved, try the connectivity again. e. Review **Alias** on the **Overview** tab and **Resource ID** on the **Properties** tab. - Make sure the **Alias** and **Resource ID** information matches the **Alias** and **Resource ID** you're using to create a private endpoint to this service.
- ![Verify Alias information](./media/private-link-tsg/pls-overview-pane-alias.png)
+ :::image type="content" source="./media/private-link-tsg/pls-overview-pane-alias.png" alt-text="Screenshot of verify alias information.":::
- ![Verify Resource ID information](./media/private-link-tsg/pls-properties-pane-resourceid.png)
+ :::image type="content" source="./media/private-link-tsg/pls-properties-pane-resourceid.png" alt-text="Screenshot of verify resource ID information.":::
f. Review **Visibility** information on the **Overview** tab.+ - Make sure that your subscription falls under the **Visibility** scope.
- ![Verify Visibility information](./media/private-link-tsg/pls-overview-pane-visibility.png)
+ :::image type="content" source="./media/private-link-tsg/pls-overview-pane-visibility.png" alt-text="Screenshot of verify visibility information.":::
g. Review **Load balancer** information on the **Overview** tab.+ - You can go to the load balancer by selecting the load balancer link.
- ![Verify Load balancer information](./media/private-link-tsg/pls-overview-pane-ilb.png)
+ :::image type="content" source="./media/private-link-tsg/pls-overview-pane-ilb.png" alt-text="Screenshot of verify load balancer information.":::
- Make sure that the load balancer settings are configured as per your expectations.+ - Review **Frontend IP configuration**.+ - Review **Backend pools**.+ - Review **Load balancing rules**.
- ![Verify load balancer properties](./media/private-link-tsg/pls-ilb-properties.png)
+ :::image type="content" source="./media/private-link-tsg/pls-ilb-properties.png" alt-text="Screenshot of verify load balancer properties.":::
- Make sure the load balancer is working as per the previous settings.+ - Select a VM in any subnet other than the subnet where the load balancer back-end pool is available.+ - Try accessing the load balancer front end from the previous VM.+ - If the connection makes it to the back-end pool as per load-balancing rules, your load balancer is operational.+ - You can also review the load balancer metric through Azure Monitor to see if data is flowing through the load balancer. 1. Use [Azure Monitor](../azure-monitor/overview.md) to see if data is flowing. a. On the private link service resource, select **Metrics**.+ - Select **Bytes In** or **Bytes Out**.+ - See if data is flowing when you attempt to connect to the private link service. Expect a delay of approximately 10 minutes.
- ![Verify private link service metrics](./media/private-link-tsg/pls-metrics.png)
+ :::image type="content" source="./media/private-link-tsg/pls-metrics.png" alt-text="Screenshot of verify private link service metrics.":::
1. Use [Azure Monitor - Networks](../network-watcher/network-insights-overview.md#resource-view) for insights and to see a resource view of the resources by going to:+ - Azure Monitor+ - Networks
- - Private Link services
- - Resource view
-![AzureMonitor](https://user-images.githubusercontent.com/20302679/135001735-56a9484b-f9b4-484b-a503-cfb9d20b264a.png)
+ - Private Link services
-![DependencyView](https://user-images.githubusercontent.com/20302679/135001741-8e848c52-d4bb-4646-b0d3-a85614ebe16c.png)
+ - Resource view
-4. Contact the [Azure Support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) team if your problem is still unresolved and a connectivity problem still exists.
+Contact the [Azure Support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) team if your connectivity problem is still unresolved.
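The **Bytes In** / **Bytes Out** check from the metrics step can also be scripted. The following is a minimal Azure PowerShell sketch; the resource ID is a placeholder, and the metric names are discovered from the resource rather than assumed.

```azurepowershell-interactive
# Placeholder resource ID for the private link service.
$plsId = '/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/privateLinkServices/<pls-name>'

# List the metric names the resource exposes, then pull the last hour of the first one.
$definitions = Get-AzMetricDefinition -ResourceId $plsId
$definitions | ForEach-Object { $_.Name.Value }

Get-AzMetric -ResourceId $plsId -MetricName $definitions[0].Name.Value -StartTime (Get-Date).AddHours(-1)
```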
## Next steps * [Create a private link service (CLI)](./create-private-link-service-cli.md)
- * [Azure Private Endpoint troubleshooting guide](troubleshoot-private-endpoint-connectivity.md)
+
+* [Azure Private Endpoint troubleshooting guide](troubleshoot-private-endpoint-connectivity.md)
purview Catalog Private Link Name Resolution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link-name-resolution.md
Previously updated : 02/23/2023 Last updated : 03/29/2023 # Customer intent: As a Microsoft Purview admin, I want to set up private endpoints for my Microsoft Purview account, for secure access.
If you do not use DNS forwarders and instead you manage A records directly in yo
| `datasource.prod.ext.web.purview.azure.com` | A | \<portal private endpoint IP address of Microsoft Purview\> | | `policy.prod.ext.web.purview.azure.com` | A | \<portal private endpoint IP address of Microsoft Purview\> | | `sensitivity.prod.ext.web.purview.azure.com` | A | \<portal private endpoint IP address of Microsoft Purview\> |-
+| `web.privatelink.purviewstudio.azure.com` | A | \<portal private endpoint IP address of Microsoft Purview\> |
+| `workflow.prod.ext.web.purview.azure.com` | A | \<portal private endpoint IP address of Microsoft Purview\> |
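After you create the records, a quick check from a virtual machine inside the virtual network can confirm that each name resolves to the portal private endpoint IP address. A minimal PowerShell sketch, using two of the names from the table above:

```powershell
# Confirm the A records resolve to the portal private endpoint IP address of Microsoft Purview.
$names = @(
    'web.privatelink.purviewstudio.azure.com',
    'workflow.prod.ext.web.purview.azure.com'
)

foreach ($name in $names) {
    Resolve-DnsName -Name $name -Type A | Select-Object Name, IPAddress
}
```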
## Verify and test DNS name resolution and connectivity
route-server Quickstart Configure Route Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-cli.md
RouteServerIps : {10.5.10.4, 10.5.10.5} "virtualRouterAsn": 65515,
``` + ## Configure route exchange If you have an ExpressRoute and an Azure VPN gateway in the same virtual network and you want them to exchange routes, you can enable route exchange on the Azure Route Server.
route-server Quickstart Configure Route Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-portal.md
You'll need the Azure Route Server's peer IPs and ASN to complete the configurat
:::image type="content" source="./media/quickstart-configure-route-server-portal/route-server-overview.png" alt-text="Screenshot of Route Server overview page.":::
-> [!IMPORTANT]
-> To ensure that virtual network routes are advertised over the NVA connections, and to achieve high availability, we recommend peering each NVA with both Route Server instances.
## Configure route exchange
route-server Quickstart Configure Route Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-powershell.md
# Quickstart: Create and configure Route Server using Azure PowerShell
-This article helps you configure Azure Route Server to peer with a Network Virtual Appliance (NVA) in your virtual network using Azure PowerShell. Route Server will learn routes from your NVA and program them on the virtual machines in the virtual network. Azure Route Server will also advertise the virtual network routes to the NVA. For more information, see [Azure Route Server](overview.md).
+This article helps you configure Azure Route Server to peer with a Network Virtual Appliance (NVA) in your virtual network using Azure PowerShell. Route Server learns routes from your NVA and programs them on the virtual machines in the virtual network. Azure Route Server also advertises the virtual network routes to the NVA. For more information, see [Azure Route Server](overview.md).
:::image type="content" source="media/quickstart-configure-route-server-portal/environment-diagram.png" alt-text="Diagram of Route Server deployment environment using the Azure PowerShell." border="false":::
$virtualNetwork = New-AzVirtualNetwork @vnet
### Add a dedicated subnet
-Azure Route Server requires a dedicated subnet named *RouteServerSubnet*. The subnet size has to be at least /27 or short prefix (such as /26 or /25) or you'll receive an error message when deploying the Route Server. Create a subnet configuration named **RouteServerSubnet** with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig):
+Azure Route Server requires a dedicated subnet named *RouteServerSubnet*. The subnet size has to be at least /27 or a shorter prefix (such as /26 or /25), or you'll receive an error message when deploying the Route Server. Create a subnet configuration named **RouteServerSubnet** with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig):
```azurepowershell-interactive $subnet = @{
RouteServerAsn : 65515
RouteServerIps : {10.5.10.4, 10.5.10.5} ``` + ## <a name = "route-exchange"></a>Configure route exchange If you have an ExpressRoute and an Azure VPN gateway in the same virtual network and you want them to exchange routes, you can enable route exchange on the Azure Route Server.
route-server Route Injection In Spokes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-injection-in-spokes.md
Previously updated : 03/28/2023 Last updated : 03/29/2023
Azure Route Server offers a centralized point where network virtual appliances (
## Topology
-The following diagram depicts a simple hub and spoke design with a hub VNet and two spoke VNets. In the hub, a network virtual appliance and a route server have been deployed. Without the route server, user-defined routes would have to be configured in every spoke. These UDRs would usually contain a default route for 0.0.0.0/0 that sends all traffic from the spoke VNets through the NVA. This scenario can be used, for example, to inspect the traffic for security purposes.
+The following diagram depicts a simple hub and spoke design with a hub VNet and two spoke VNets. In the hub, a network virtual appliance and a Route Server have been deployed. Without the Route Server, user-defined routes would have to be configured in every spoke. These UDRs would usually contain a default route for 0.0.0.0/0 that sends all traffic from the spoke VNets through the NVA. This scenario can be used, for example, to inspect the traffic for security purposes.
:::image type="content" source="./media/scenarios/route-injection.png" alt-text="Diagram showing a basic hub and spoke topology.":::
-With the route server in the hub VNet, there's no need to use user-defined routes. The NVA advertises network prefixes to the route server, which injects them so they appear in the effective routes of any virtual machine deployed in the hub VNet or spoke VNets that are peered with the hub VNet with the setting *Use remote virtual network's gateway or Route Server*.
+With the Route Server in the hub VNet, there's no need to use user-defined routes. The NVA advertises network prefixes to the Route Server, which injects them so they appear in the effective routes of any virtual machine deployed in the hub VNet or spoke VNets that are peered with the hub VNet with the setting ***Use the remote virtual network's gateway or Route Server***.
## Connectivity to on-premises through the NVA
If the NVA is used to provide connectivity to on-premises network via IPsec VPNs
## Inspecting private traffic through the NVA
-The previous sections depict the traffic being inspected by the network virtual appliance (NVA) by injecting a `0.0.0.0/0` default route from the NVA to the route server. However, if you wish to only inspect spoke-to-spoke and spoke-to-on-premises traffic through the NVA, you should consider that Azure Route Server doesn't advertise a route that is the same or longer prefix than the virtual network address space learned from the NVA. In other words, Azure Route Server won't inject these prefixes into the virtual network and they won't be programmed on the NICs of virtual machines in the hub or spoke VNets.
+The previous sections depict the traffic being inspected by the network virtual appliance (NVA) by injecting a `0.0.0.0/0` default route from the NVA to the Route Server. However, if you wish to only inspect spoke-to-spoke and spoke-to-on-premises traffic through the NVA, you should consider that Azure Route Server doesn't advertise a route that is the same or longer prefix than the virtual network address space learned from the NVA. In other words, Azure Route Server won't inject these prefixes into the virtual network and they won't be programmed on the NICs of virtual machines in the hub or spoke VNets.
Azure Route Server, however, will advertise a larger subnet than the VNet address space that is learned from the NVA. It's possible to advertise from the NVA a supernet of what you have in your virtual network. For example, if your virtual network uses the RFC 1918 address space `10.0.0.0/16`, your NVA can advertise `10.0.0.0/8` to the Azure Route Server and these prefixes will be injected into the hub and spoke VNets. This VNet behavior is referenced in [About BGP with VPN Gateway](../vpn-gateway/vpn-gateway-bgp-overview.md#can-i-advertise-the-exact-prefixes-as-my-virtual-network-prefixes).
If a VPN or an ExpressRoute gateway exists in the same virtual network as the Ro
You can't configure the subnets in the spoke VNets to only learn the routes from the Azure Route Server. Disabling "Propagate gateway routes" in a route table associated to a subnet would prevent both types of routes (routes from the virtual network gateway and routes from the Azure Route Server) to be injected into NICs in that subnet.
-By default, the route server advertises all prefixes learned from the NVA to ExpressRoute too. This might not be desired, for example because of the route limits of ExpressRoute or the route server itself. In that case, the NVA can announce its routes to the route server including the BGP community `no-advertise` (with value `65535:65282`). When route server receives routes with this BGP community, it injects them to the subnets, but it will not advertise them to any other BGP peers (like ExpressRoute or VPN gateways, or other NVAs).
+By default, the Route Server advertises all prefixes learned from the NVA to ExpressRoute too. This might not be desired, for example because of the route limits of ExpressRoute or the Route Server itself. In that case, the NVA can announce its routes to the Route Server including the BGP community `no-advertise` (with value `65535:65282`). When Route Server receives routes with this BGP community, it injects them to the subnets, but it will not advertise them to any other BGP peers (like ExpressRoute or VPN gateways, or other NVAs).
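One way to confirm this behavior is to compare the routes the Route Server has learned from the NVA with the routes it advertises to another peer. A minimal Azure PowerShell sketch, with placeholder resource group, Route Server, and peer names:

```azurepowershell-interactive
# Routes the Route Server has learned from the NVA peering (placeholder names).
Get-AzRouteServerPeerLearnedRoute -ResourceGroupName 'myRg' -RouteServerName 'myRouteServer' -PeerName 'myNva'

# Routes the Route Server advertises to another peer; prefixes the NVA tagged with
# the no-advertise community shouldn't appear here.
Get-AzRouteServerPeerAdvertisedRoute -ResourceGroupName 'myRg' -RouteServerName 'myRouteServer' -PeerName 'myOtherPeer'
```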
## SDWAN coexistence with ExpressRoute and Azure Firewall
-A particular case of the previous design is when customers insert the Azure Firewall in the traffic flow to inspect all traffic going to on-premises networks, either via ExpressRoute or via SD-WAN/VPN appliances. In this situation, all spoke subnets have route tables that prevent the spokes from learning any route from ExpressRoute or the route server, and have default routes (0.0.0.0/0) with the Azure Firewall as next hop, as the following diagram shows:
+A particular case of the previous design is when customers insert the Azure Firewall in the traffic flow to inspect all traffic going to on-premises networks, either via ExpressRoute or via SD-WAN/VPN appliances. In this situation, all spoke subnets have route tables that prevent the spokes from learning any route from ExpressRoute or the Route Server, and have default routes (0.0.0.0/0) with the Azure Firewall as next hop, as the following diagram shows:
:::image type="content" source="./media/scenarios/route-injection-vpn-expressroute-firewall.png" alt-text="Diagram showing hub and spoke topology with on-premises connectivity via NVA for VPN and ExpressRoute where Azure Firewall does the breakout.":::
-The Azure Firewall subnet learns the routes coming from both ExpressRoute and the VPN/SDWAN NVA, and decides whether sending traffic one way or the other. As described in the previous section, if the NVA appliance advertises more than 200 routes to the route server, it should send its BGP routes marked with the BGP community `no-advertise`. This way, the SDWAN prefixes won't be injected back to on-premises via Express-Route.
+The Azure Firewall subnet learns the routes coming from both ExpressRoute and the VPN/SDWAN NVA, and decides whether to send traffic one way or the other. As described in the previous section, if the NVA appliance advertises more than 200 routes to the Route Server, it should send its BGP routes marked with the BGP community `no-advertise`. This way, the SDWAN prefixes won't be injected back to on-premises via ExpressRoute.
## Traffic symmetry
However, there's an alternative, more dynamic approach. It's possible using diff
:::image type="content" source="./media/scenarios/route-injection-split-route-server.png" alt-text="Diagram showing a basic hub and spoke topology with on-premises connectivity via ExpressRoute and two Route Servers.":::
-Route Server 1 in the hub is used to inject the prefixes from the SDWAN into ExpressRoute. Since the spoke VNets are peered with the hub VNet without the "Use remote virtual network's gateway or Route Server" and "Allow gateway transit" VNet peering options, the spoke VNets don't learn these routes (neither the SDWAN prefixes nor the ExpressRoute prefixes).
+Route Server 1 in the hub is used to inject the prefixes from the SDWAN into ExpressRoute. Since the spoke VNets are peered with the hub VNet without the ***Use the remote virtual network's gateway or Route Server*** option (in the spoke VNet peering) and the ***Use this virtual network's gateway or Route Server*** option (***Gateway transit*** in the hub VNet peering), the spoke VNets don't learn these routes (neither the SDWAN prefixes nor the ExpressRoute prefixes).
-To propagate routes to the spoke VNets, the NVA uses Route Server 2, deployed in a new auxiliary VNet. The NVA will only propagate a single `0.0.0.0/0` route to Route Server 2. Since the spoke VNets are peered with this auxiliary VNet with "Use remote virtual network's gateway or Route Server" and "Allow gateway transit" VNet peering options, the `0.0.0.0/0` route will be learned by all the virtual machines in the spokes.
+To propagate routes to the spoke VNets, the NVA uses Route Server 2, deployed in a new auxiliary VNet. The NVA will only propagate a single `0.0.0.0/0` route to Route Server 2. Since the spoke VNets are peered with this auxiliary VNet with the ***Use the remote virtual network's gateway or Route Server*** option (in the spoke VNet peering) and the ***Use this virtual network's gateway or Route Server*** option (***Gateway transit*** in the hub VNet peering) enabled, the `0.0.0.0/0` route will be learned by all the virtual machines in the spokes.
The next hop for the `0.0.0.0/0` route is the NVA, so the spoke VNets still need to be peered to the hub VNet. Another important aspect to notice is that the hub VNet needs to be peered to the VNet where Route Server 2 is deployed, otherwise it won't be able to create the BGP adjacency.
This design allows automatic injection of routes in spoke VNets without interfer
* Learn more about [Azure Route Server support for ExpressRoute and Azure VPN](expressroute-vpn-support.md) * Learn how to [Configure peering between Azure Route Server and Network Virtual Appliance](tutorial-configure-route-server-with-quagga.md)++
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/get-started.md
Get started quickly with the [SAP on Azure Deployment Automation Framework](depl
- An Azure subscription. If you don't have an Azure subscription, you can [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Ability to [download of the SAP software](software.md) in your Azure environment.-- A [Terraform](https://www.terraform.io/) installation. For more information, also see the [Terraform on Azure documentation](/azure/developer/terraform/). - An [Azure CLI](/cli/azure/install-azure-cli) installation on your local computer.
+- An [Azure PowerShell](/powershell/azure/install-az-ps#update-the-azure-powershell-module) installation on your local computer.
- A Service Principal to use for the control plane deployment-- Optionally, if you want to use PowerShell:
- - An [Azure PowerShell](/powershell/azure/install-az-ps#update-the-azure-powershell-module) installation on your local computer.
- - The latest PowerShell modules. [Update the PowerShell module](/powershell/azure/install-az-ps#update-the-azure-powershell-module) if needed.
Some of the prerequisites may already be installed in your deployment environment. Both Cloud Shell and the deployer have Terraform and the Azure CLI installed. ## Clone the repository
sap Plan Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/plan-deployment.md
Before you design your control plane, consider the following questions:
* How is outbound internet provided for the Virtual Machines? * Are you going to deploy Azure Firewall for outbound internet connectivity? * Are private endpoints required for storage accounts and the key vault?
-* Are you going to use a custom DNS zone for the Virtual Machines?
+* Are you going to use an existing Private DNS zone for the Virtual Machines or will you use the Control Plane for it?
* Are you going to use Azure Bastion for secure remote access to the Virtual Machines? * Are you going to use the SDAF Configuration Web Application for performing configuration and deployment activities?
-### Deployment environments
+### Control Plane
-If you're supporting multiple workload zones in a region, use a unique identifier for your deployment environment and SAP library. Don't use the same identifier as for the workload zone. For example, use `MGMT` for management purposes.
+If you're supporting multiple workload zones in a region, use a unique identifier for your control plane. Don't use the same identifier as for the workload zone. For example, use `MGMT` for management purposes.
-The automation framework also supports having the deployment environment and SAP library in separate subscriptions than the workload zones.
+The automation framework also supports having the control plane in a separate subscription from the workload zones.
-The deployment environment provides the following
+The control plane provides the following:
- Deployment VMs, which do Terraform deployments and Ansible configuration. Acts as Azure DevOps self-hosted agents. - A key vault, which contains the deployment credentials (service principals) used by Terraform when performing the deployments. - Azure Firewall for providing outbound internet connectivity.-- Azure Bastion component for providing secure remote access to the deployed Virtual Machines.-- An Azure Web Application for performing configuration and deployment activities.
+- Azure Bastion for providing secure remote access to the deployed Virtual Machines.
+- An SDAF Configuration Azure Web Application for performing configuration and deployment activities.
+
+The control plane is defined using two configuration files:
The deployment configuration file defines the region, environment name, and virtual network information. For example:
environment = "MGMT"
location = "westeurope" management_network_logical_name = "DEP01"+ management_network_address_space = "10.170.20.0/24" management_subnet_address_prefix = "10.170.20.64/28"+ firewall_deployment = true management_firewall_subnet_address_prefix = "10.170.20.0/26"+ bastion_deployment = true management_bastion_subnet_address_prefix = "10.170.20.128/26"+ webapp_subnet_address_prefix = "10.170.20.192/27" deployer_assign_subscription_permissions = true+ deployer_count = 2+ use_service_endpoint = true use_private_endpoint = true enable_firewall_for_keyvaults_and_storage = true
enable_firewall_for_keyvaults_and_storage = true
### DNS considerations
-When planning the DNS configuration for the deployment environment, consider the following questions:
+When planning the DNS configuration for the automation framework, consider the following questions:
+ - Is there an existing Private DNS zone that the solutions can integrate with, or do you need to use a custom Private DNS zone for the deployment environment?
- Are you going to use predefined IP addresses for the Virtual Machines or let Azure assign them dynamically?
-You can integrate with exiting Private DNS Zones by providing the following values in your tfvars files:
+You can integrate with an existing Private DNS Zone by providing the following values in your tfvars files:
```tfvars management_dns_subscription_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
management_dns_subscription_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
use_custom_dns_a_registration = false ```
-Without these values a Private DNS Zone will be created in the SAP Library resource group.
+Without these values, a Private DNS Zone will be created in the SAP Library resource group.
For more information, see the [in-depth explanation of how to configure the deployer](configure-control-plane.md). - ## SAP Library configuration
-The SAP library provides storage for SAP installation media, Bill of Material (BOM) files, Terraform state files and optionally a Private DNS Zone. The configuration file defines the region and environment name for the SAP library. For parameter information and examples, see [how to configure the SAP library for automation](configure-control-plane.md).
+The SAP library resource group provides storage for SAP installation media, Bill of Material (BOM) files, Terraform state files and optionally the Private DNS Zones. The configuration file defines the region and environment name for the SAP library. For parameter information and examples, see [how to configure the SAP library for automation](configure-control-plane.md).
## Workload zone planning
The default naming convention for workload zones is `[ENVIRONMENT]-[REGIONCODE]-
The `SAP01` and `SAP02` define the logical names for the Azure virtual networks, these can be used to further partition the environments. If you need two Azure Virtual Networks for the same workload zone, for example, for a multi subscription scenario where you host development environments in two subscriptions, you can use the different logical names for each virtual network. For example, `DEV-WEEU-SAP01-INFRASTRUCTURE` and `DEV-WEEU-SAP02-INFRASTRUCTURE`.
-The workload zone provides the following services for the SAP Applications:
+The workload zone provides the following shared services for the SAP Applications:
-* Azure Virtual Network, for a virtual network, subnets and network security groups.
+* Azure Virtual Network, subnets and network security groups.
* Azure Key Vault, for storing the virtual machine and SAP system credentials. * Azure Storage accounts, for Boot Diagnostics and Cloud Witness. * Shared storage for the SAP Systems either Azure Files or Azure NetApp Files.
The workload zone provides the following services for the SAP Applications:
Before you design your workload zone layout, consider the following questions: * In which regions do you need to deploy workloads?
-* How many workload zones does your scenario require (development, quality assurance, production etc)?
+* How many workload zones does your scenario require (development, quality assurance, production etc.)?
* Are you deploying into new Virtual networks or are you using existing virtual networks * How is DNS configured (integrate with existing DNS or deploy a Private DNS zone in the control plane)? * What storage type do you need for the shared storage (Azure Files NFS, Azure NetApp Files)?
For more information, see [how to configure a workload zone deployment for autom
### Windows based deployments
-When doing Windows based deployments the Virtual Machines in the workload zone's Virtual Network need to be able to communicate with Active Directory in order to join the SAP Virtual Machines to the Active Directory Domain. The provided DNS name needs to be resolvable by the Active Directory.
+When doing Windows based deployments, the Virtual Machines in the workload zone's Virtual Network need to be able to communicate with Active Directory in order to join the SAP Virtual Machines to the Active Directory Domain. The provided DNS name needs to be resolvable by the Active Directory.
-The workload zone key vault must contain the following secrets:
+As SDAF won't create accounts in Active Directory, the accounts need to be created in advance and stored in the workload zone key vault.
-| Credential | Name | Example |
-| | -- | |
-| Account that can perform domain join activities | [IDENTIFIER]-ad-svc-account | DEV-WEEU-SAP01-ad-svc-account |
-| Password for the account that performs the domain join | [IDENTIFIER]-ad-svc-account-password | DEV-WEEU-SAP01-ad-svc-account-password |
-| sidadm account password | [IDENTIFIER]-winsidadm_password_id | DEV-WEEU-SAP01-winsidadm_password_id |
-| SID Service account password | [IDENTIFIER]-svc-sidadm-password | DEV-WEEU-SAP01-svc-sidadm-password |
+| Credential | Name | Example |
+| | -- | -- |
+| Account that can perform domain join activities | [IDENTIFIER]-ad-svc-account | DEV-WEEU-SAP01-ad-svc-account |
+| Password for the account that performs the domain join | [IDENTIFIER]-ad-svc-account-password | DEV-WEEU-SAP01-ad-svc-account-password |
+| 'sidadm' account password | [IDENTIFIER]-[SID]-win-sidadm_password_id | DEV-WEEU-SAP01-W01-winsidadm_password_id |
+| SID Service account password | [IDENTIFIER]-[SID]-svc-sidadm-password | DEV-WEEU-SAP01-W01-svc-sidadm-password |
+| SQL Server Service account | [IDENTIFIER]-[SID]-sql-svc-account | DEV-WEEU-SAP01-W01-sql-svc-account |
+| SQL Server Service account password | [IDENTIFIER]-[SID]-sql-svc-password | DEV-WEEU-SAP01-W01-sql-svc-password |
+| SQL Server Agent Service account | [IDENTIFIER]-[SID]-sql-agent-account | DEV-WEEU-SAP01-W01-sql-agent-account |
+| SQL Server Agent Service account password | [IDENTIFIER]-[SID]-sql-agent-password | DEV-WEEU-SAP01-W01-sql-agent-password |
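The secrets can be created with Azure PowerShell. The following is a minimal sketch using the example names from the table; the key vault name, account name, and password values are placeholders.

```azurepowershell-interactive
# Store the domain-join account and its password in the workload zone key vault (placeholder values).
$vaultName = '<workload-zone-key-vault-name>'

Set-AzKeyVaultSecret -VaultName $vaultName -Name 'DEV-WEEU-SAP01-ad-svc-account' `
    -SecretValue (ConvertTo-SecureString '<domain-join-account>' -AsPlainText -Force)

Set-AzKeyVaultSecret -VaultName $vaultName -Name 'DEV-WEEU-SAP01-ad-svc-account-password' `
    -SecretValue (ConvertTo-SecureString '<domain-join-account-password>' -AsPlainText -Force)
```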
+#### DNS settings
+For High Availability scenarios, a DNS record is needed in Active Directory for the SAP Central Services cluster. The DNS record needs to be created in the Active Directory DNS zone. The DNS record name is defined as '[sid]scs[scs instance number]cl1', for example, `w01scs00cl1` for the cluster of the 'W01' SID using instance number '00'.
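As a hedged example, the record could be created on a domain controller with the DnsServer PowerShell module; the zone name and IP address below are placeholders, and the record name reuses the `w01scs00cl1` example from above.

```powershell
# Create the A record for the SAP Central Services cluster in the Active Directory DNS zone (placeholder zone and IP).
Add-DnsServerResourceRecordA -ZoneName 'contoso.com' -Name 'w01scs00cl1' -IPv4Address '10.110.32.25'
```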
## Credentials management The automation framework uses [Service Principals](#service-principal-creation) for infrastructure deployment. It's recommended to use different deployment credentials (service principals) for each [workload zone](#workload-zone-planning). The framework stores these credentials in the [deployer's](deployment-framework.md#deployment-components) key vault. Then, the framework retrieves these credentials dynamically during the deployment process.
The automation framework uses [Service Principals](#service-principal-creation)
The automation framework will use the workload zone key vault for storing both the automation user credentials and the SAP system credentials. The virtual machine credentials are named as follows:
-| Credential | Name | Example |
-| - | - | |
-| Private key | [IDENTIFIER]-sshkey | DEV-WEEU-SAP01-sid-sshkey |
-| Public key | [IDENTIFIER]-sshkey-pub | DEV-WEEU-SAP01-sid-sshkey-pub |
-| Username | [IDENTIFIER]-username | DEV-WEEU-SAP01-sid-username |
-| Password | [IDENTIFIER]-password | DEV-WEEU-SAP01-sid-password |
-| sidadm Password | [IDENTIFIER]-[SID]-sap-password | DEV-WEEU-SAP01-X00-sap-password |
-| sidadm account password | [IDENTIFIER]-winsidadm_password_id | DEV-WEEU-SAP01-winsidadm_password_id |
-| SID Service account password | [IDENTIFIER]-svc-sidadm-password | DEV-WEEU-SAP01-svc-sidadm-password |
+| Credential | Name | Example |
+| - | - | - |
+| Private key | [IDENTIFIER]-sshkey | DEV-WEEU-SAP01-sid-sshkey |
+| Public key | [IDENTIFIER]-sshkey-pub | DEV-WEEU-SAP01-sid-sshkey-pub |
+| Username | [IDENTIFIER]-username | DEV-WEEU-SAP01-sid-username |
+| Password | [IDENTIFIER]-password | DEV-WEEU-SAP01-sid-password |
+| sidadm Password | [IDENTIFIER]-[SID]-sap-password | DEV-WEEU-SAP01-X00-sap-password |
+| sidadm account password | [IDENTIFIER]-[SID]-winsidadm_password_id | DEV-WEEU-SAP01-W01-winsidadm_password_id |
+| SID Service account password | [IDENTIFIER]-[SID]-svc-sidadm-password | DEV-WEEU-SAP01-W01-svc-sidadm-password |
### Service principal creation Create your service principal:
-1. Sign in to the [Azure CLI](/cli/azure/) with an account that has adequate privileges to create a Service Principal.
+1. Sign in to the [Azure CLI](/cli/azure/) with an account that has permissions to create a Service Principal.
1. Create a new Service Principal by running the command `az ad sp create-for-rbac`. Make sure to use a description name for `--name`. For example: ```azurecli az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" --name="DEV-Deployment-Account"
In a locked down environment, you might need to assign another permissions to th
#### Required permissions
-The following table shows the required permissions for the service principal:
+The following table shows the required permissions for the service principals:
> [!div class="mx-tdCol2BreakAll "] > | Credential | Area | Required permissions |
The following table shows the required permissions for the service principal:
> | Control Plane SPN | Control Plane subscription | Contributor | > | Workload Zone SPN | Target subscription | Contributor | > | Workload Zone SPN | Control plane subscription | Reader |
+> | Workload Zone SPN | Control Plane Virtual Network | Network Contributor |
> | Workload Zone SPN | SAP Library tfstate storage account | Storage Account Contributor | > | Workload Zone SPN | SAP Library sapbits storage account | Reader | > | Workload Zone SPN | Private DNS Zone | Private DNS Zone Contributor |
-> | Workload Zone SPN | Control Plane Virtual Network | Network Contributor |
> | Web Application Identity | Target subscription | Reader | > | Cluster Virtual Machine Identity | Resource Group | Fencing role |
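For example, the Reader assignment for the workload zone service principal on the control plane subscription could be granted with a sketch like the following; the application ID and subscription ID are placeholders.

```azurepowershell-interactive
# Grant the workload zone service principal Reader on the control plane subscription (placeholder IDs).
New-AzRoleAssignment -ApplicationId '<workload-zone-spn-application-id>' `
    -RoleDefinitionName 'Reader' `
    -Scope '/subscriptions/<control-plane-subscription-id>'
```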
sap Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/cal-s4h.md
The online library is continuously updated with Appliances for demo, proof of co
| Appliance Template | Date | Description | Creation Link | | | - | -- | - | | [**SAP S/4HANA 2022, Fully-Activated Appliance**]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/f4e6b3ba-ba8f-485f-813f-be27ed5c8311) | December 15 2022 |This appliance contains SAP S/4HANA 2022 (SP00) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=f4e6b3ba-ba8f-485f-813f-be27ed5c8311&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8)
-| [**SAP ABAP Platform 1909, Developer Edition**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/7bd4548f-a95b-4ee9-910a-08c74b4f6c37) | June 21 2021 |The SAP ABAP Platform on SAP HANA gives you access to SAP ABAP Platform 1909 Developer Edition on SAP HANA. Note that this solution is preconfigured with many additional elements, including: SAP ABAP RESTful Application Programming Model, SAP Fiori launchpad, SAP gCTS, SAP ABAP Test Cockpit, and preconfigured frontend / backend connections, etc. It also includes all the standard ABAP AS infrastructure: Transaction Management, database operations / persistence, Change and Transport System, SAP Gateway, interoperability with ABAP Development Toolkit and SAP WebIDE, and much more. | [Create Appliance](https://cal.sap.com/registration?sguid=7bd4548f-a95b-4ee9-910a-08c74b4f6c37&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8)
+| [**SAP S/4HANA 2022 FPS01**]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/d6215ddc-67c6-4bdc-8df7-302367f8e016) | February 28 2023 | This solution comes as a standard S/4HANA system installation including a remote desktop for easy frontend access. It contains a pre-configured and activated SAP S/4HANA Fiori UI in client 100, with prerequisite components activated as per SAP note 3282460 - Composite SAP note: Rapid Activation for SAP Fiori in SAP S/4HANA 2022 FPS01. | [Create Appliance](https://cal.sap.com/registration?sguid=d6215ddc-67c6-4bdc-8df7-302367f8e016&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8)
[**SAP S/4HANA 2021 FPS02, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3f4931de-b15b-47f1-b93d-a4267296b8bc) | July 19 2022 | This appliance contains SAP S/4HANA 2021 (FPS02) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Management (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=3f4931de-b15b-47f1-b93d-a4267296b8bc&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-| [**SAP NetWeaver 7.5 SP15 on SAP ASE**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/solutions/69efd5d1-04de-42d8-a279-813b7a54c1f6) | January 3 2018 | SAP NetWeaver 7.5 SP15 on SAP ASE | [Create Appliance](https://cal.sap.com/registration?sguid=69efd5d1-04de-42d8-a279-813b7a54c1f6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
| [**SAP S/4HANA 2022**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/c4aff915-1af8-4d45-b370-0b38a079f9bc) | December 4 2022 | This solution comes as a standard S/4HANA system installation including a remote desktop for easy frontend access. It contains a pre-configured and activated SAP S/4HANA Fiori UI in client 100, with prerequisite components activated as per SAP note 3166600 Composite SAP note: Rapid Activation for SAP Fiori in SAP S/4HANA 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=f4e6b3ba-ba8f-485f-813f-be27ed5c8311&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-| [**SAP Focused Run 4.0 SP00 (configured)**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/f4a6643f-2731-486c-af82-0508396650b7) | January 19 2023 |SAP Focused Run is designed specifically for businesses that need high-volume system and application monitoring, alerting, and analytics. It's a powerful solution for service providers, who want to host all their customers in one central, scalable, safe, and automated environment. It also addresses customers with advanced needs regarding system management, user monitoring, integration monitoring, and configuration and security analytics. | [Create Appliance](https://cal.sap.com/registration?sguid=f4a6643f-2731-486c-af82-0508396650b7&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+| [**SAP NetWeaver 7.5 SP15 on SAP ASE**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/solutions/69efd5d1-04de-42d8-a279-813b7a54c1f6) | January 3 2018 | SAP NetWeaver 7.5 SP15 on SAP ASE | [Create Appliance](https://cal.sap.com/registration?sguid=69efd5d1-04de-42d8-a279-813b7a54c1f6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+| [**SAP BW/4HANA 2021 including BW/4HANA Content 2.0 SP08 - Dev Edition**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/06725b24-b024-4757-860d-ac2db7b49577)| May 11 2022 | This solution offers you an insight of SAP BW/4HANA. SAP BW/4HANA is the next generation Data Warehouse optimized for HANA. Beside the basic BW/4HANA options the solution offers a bunch of HANA optimized BW/4HANA Content and the next step of Hybrid Scenarios with SAP Data Warehouse Cloud. As the system is pre-configured you can start directly implementing your scenarios. | [Create Appliance](https://cal.sap.com/registration?sguid=06725b24-b024-4757-860d-ac2db7b49577&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
The following links highlight the Product stacks that you can quickly deploy on
| -- | :-- |
| **SAP S/4HANA 2022 FPS00 for Productive Deployments** | [Deploy System](https://cal.sap.com/registration?sguid=3b1dc287-c865-4f79-b9ed-d5ec2dc755e9&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8&provType=pd) |
|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/products/3b1dc287-c865-4f79-b9ed-d5ec2dc755e9) |
-| **SAP S/4HANA 2021 FPS02 for Productive Deployments** | [Deploy System](https://cal.sap.com/registration?sguid=4d5f19a7-d3cb-4d47-9f44-0a9e133b11de&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8&provType=pd) |
-|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/products/4d5f19a7-d3cb-4d47-9f44-0a9e133b11de) |
-| **SAP S/4HANA 2021 FPS01 for Productive Deployments** | [Deploy System](https://cal.sap.com/catalog#/products) |
-|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/products/1c796928-0617-490b-a87d-478568a49628) |
-| **SAP S/4HANA 2021 FPS00 for Productive Deployments** | [Deploy System](https://cal.sap.com/catalog#/products) |
-|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/products/108febf9-5e7b-4e47-a64d-231b6c4c821d) |
-| **SAP S/4HANA 2020 FPS04 for Productive Deployments** | [Deploy System](https://cal.sap.com/registration?sguid=615c5c18-5226-4dcb-b0ab-19d0141baf9b&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8&provType=pd) |
-|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/products/615c5c18-5226-4dcb-b0ab-19d0141baf9b) |
+| **SAP S/4HANA 2021 FPS03 for Productive Deployments** | [Deploy System](https://cal.sap.com/registration?sguid=6921f2f8-169b-45bb-9e0b-d89b4abee1f3&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8&provType=pd) |
+|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/products/6921f2f8-169b-45bb-9e0b-d89b4abee1f3) |
+| **SAP S/4HANA 2021 FPS02 for Productive Deployments** | [Deploy System](https://cal.sap.com/registration?sguid=4d5f19a7-d3cb-4d47-9f44-0a9e133b11de&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8&provType=pd) |
+|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/products/4d5f19a7-d3cb-4d47-9f44-0a9e133b11de) |
+| **SAP S/4HANA 2021 FPS01 for Productive Deployments** | [Deploy System](https://cal.sap.com/registration?sguid=1c796928-0617-490b-a87d-478568a49628&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8&provType=pd) |
+|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/products/1c796928-0617-490b-a87d-478568a49628)|
+| **SAP S/4HANA 2021 FPS00 for Productive Deployments** | [Deploy System](https://cal.sap.com/registration?sguid=108febf9-5e7b-4e47-a64d-231b6c4c821d&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8&provType=pd) |
+|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/products/108febf9-5e7b-4e47-a64d-231b6c4c821d) |
+| **SAP S/4HANA 2020 FPS04 for Productive Deployments** | [Deploy System](https://cal.sap.com/registration?sguid=615c5c18-5226-4dcb-b0ab-19d0141baf9b&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8&provType=pd) |
+|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/products/615c5c18-5226-4dcb-b0ab-19d0141baf9b) |
sap Hana Vm Troubleshoot Scale Out Ha On Sles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-troubleshoot-scale-out-ha-on-sles.md
[azure-linux-multiple-nics]:../../virtual-machines/linux/multiple-nics.md
[suse-cloud-netconfig]:https://www.suse.com/c/multi-nic-cloud-netconfig-ec2-azure/
[sap-list-port-numbers]:https://help.sap.com/viewer/ports
-[sles-12-ha-paper]:https://www.suse.com/documentation/sle-ha-12/pdfdoc/book_sleha/book_sleha.pdf
[sles-zero-downtime-paper]:https://www.youtube.com/embed/0FW3J6GbxOk
[sap-nw-ha-guide-sles]:high-availability-guide-suse.md
[sles-12-for-sap]:https://www.scribd.com/document/377847444/Suse-Linux-Enterprise-Server-for-Sap-Applications-12-Sp1
Every VM in the cluster has three vNICs that correspond to the number of subnets
To verify that SAP HANA is configured correctly to use multiple networks, run the following commands. First check on the OS level that all three internal IP addresses for all three subnets are active. If you defined the subnets with different IP address ranges, you have to adapt the commands:
-<pre><code>
-ifconfig | grep "inet addr:10\."
-</code></pre>
+```bash
+sudo ifconfig | grep "inet addr:10\."
+```
The following sample output is from the second worker node on site 2. You can see three different internal IP addresses from eth0, eth1, and eth2:
-<pre><code>
+```output
inet addr:10.0.0.42 Bcast:10.0.0.255 Mask:255.255.255.0
inet addr:10.0.1.42 Bcast:10.0.1.255 Mask:255.255.255.0
inet addr:10.0.2.42 Bcast:10.0.2.255 Mask:255.255.255.0
-</code></pre>
+```
Next, verify the SAP HANA ports for the name server and HSR. SAP HANA should listen on the corresponding subnets. Depending on the SAP HANA instance number, you have to adapt the commands. For the test system, the instance number was **00**. There are different ways to find out which ports are used. The following SQL statement returns the instance ID, instance number, and other information:
-<pre><code>
+```sql
select * from "SYS"."M_SYSTEM_OVERVIEW"
-</code></pre>
+```
To find the correct port numbers, you can look, for example, in HANA Studio under **Configuration** or via a SQL statement:
-<pre><code>
+```sql
select * from M_INIFILE_CONTENTS WHERE KEY LIKE 'listen%'
-</code></pre>
+```
To find every port that's used in the SAP software stack including SAP HANA, search [TCP/IP ports of all SAP products][sap-list-port-numbers].
Given the instance number **00** in the SAP HANA 2.0 test system, the port numbe
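Because the ports to probe depend on the instance number, a tiny helper like the following minimal sketch can be handy (the 3\<NN\>01 / 4\<NN\>02 numbering is assumed from the examples used in this article's test system; adapt it if your system uses different ports):

```bash
# Derive the ports probed below from the SAP HANA instance number.
# Assumption: the name server listens on 3<NN>01 and HSR on 4<NN>02, as in this test system.
INSTANCE_NO=00
echo "Name server port: 3${INSTANCE_NO}01"
echo "HSR port:         4${INSTANCE_NO}02"
```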
Check the name server port:
-<pre><code>
-nc -vz 10.0.0.40 30001
-nc -vz 10.0.1.40 30001
-nc -vz 10.0.2.40 30001
-</code></pre>
+```bash
+sudo nc -vz 10.0.0.40 30001
+sudo nc -vz 10.0.1.40 30001
+sudo nc -vz 10.0.2.40 30001
+```
To prove that the internode communication uses subnet **10.0.2.0/24**, the result should look like the following sample output. Only the connection via subnet **10.0.2.0/24** should succeed:
-<pre><code>
+```output
nc: connect to 10.0.0.40 port 30001 (tcp) failed: Connection refused
nc: connect to 10.0.1.40 port 30001 (tcp) failed: Connection refused
Connection to 10.0.2.40 30001 port [tcp/pago-services1] succeeded!
-</code></pre>
+```
Now check for HSR port **40002**:
-<pre><code>
-nc -vz 10.0.0.40 40002
-nc -vz 10.0.1.40 40002
-nc -vz 10.0.2.40 40002
-</code></pre>
+```bash
+sudo nc -vz 10.0.0.40 40002
+sudo nc -vz 10.0.1.40 40002
+sudo nc -vz 10.0.2.40 40002
+```
To prove that the HSR communication uses subnet **10.0.1.0/24**, the result should look like the following sample output. Only the connection via subnet **10.0.1.0/24** should succeed:
-<pre><code>
+```output
nc: connect to 10.0.0.40 port 40002 (tcp) failed: Connection refused
Connection to 10.0.1.40 40002 port [tcp/*] succeeded!
nc: connect to 10.0.2.40 port 40002 (tcp) failed: Connection refused
-</code></pre>
+```
## Corosync
-The **corosync** config file has to be correct on every node in the cluster including the majority maker node. If the cluster join of a node doesn't work as expected, create or copy **/etc/corosync/corosync.conf** manually onto all nodes and restart the service.
+The **corosync** config file has to be correct on every node in the cluster including the majority maker node. If the cluster join of a node doesn't work as expected, create or copy `/etc/corosync/corosync.conf` manually onto all nodes and restart the service.
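One way to push the file out is a small loop like the following minimal sketch (the node names are taken from this article's test system and passwordless root SSH between the nodes is assumed; adapt both to your cluster):

```bash
# Copy the corosync configuration to the remaining cluster nodes (including the
# majority maker) and restart the corosync service on each of them.
for node in hso-hana-vm-s1-1 hso-hana-vm-s1-2 hso-hana-vm-s2-0 hso-hana-vm-s2-1 hso-hana-vm-s2-2 hso-hana-dm; do
  scp /etc/corosync/corosync.conf root@"${node}":/etc/corosync/corosync.conf
  ssh root@"${node}" "systemctl restart corosync"
done
```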
The content of **corosync.conf** from the test system is an example. The first section is **totem**, as described in [Cluster installation](./high-availability-guide-suse-pacemaker.md#install-the-cluster), step 11. You can ignore the value for **mcastaddr**. Just keep the existing entry. The entries for **token** and **consensus** must be set according to [Microsoft Azure SAP HANA documentation][sles-pacemaker-ha-guide].
-<pre><code>
+```config
totem {
  version: 2
  secauth: on
totem {
  transport: udpu
}
-</code></pre>
+```
The second section, **logging**, wasn't changed from the given defaults:
-<pre><code>
+```config
logging {
  fileline: off
  to_stderr: no
logging {
  debug: off
  }
}
-</code></pre>
+```
The third section shows the **nodelist**. All nodes of the cluster have to show up with their **nodeid**:
-<pre><code>
+```config
nodelist {
  node {
    ring0_addr:hso-hana-vm-s1-0
nodelist {
    nodeid: 7
  }
}
-</code></pre>
+```
In the last section, **quorum**, it's important to set the value for **expected_votes** correctly. It must be the number of nodes including the majority maker node. And the value for **two_node** has to be **0**. Don't remove the entry completely. Just set the value to **0**.
-<pre><code>
+```config
quorum {
  # Enable and configure quorum subsystem (default: off)
  # see also corosync.conf.5 and votequorum.5
quorum {
  expected_votes: 7
  two_node: 0
}
-</code></pre>
-
+```
Restart the service via **systemctl**:
-<pre><code>
+```bash
systemctl restart corosync
-</code></pre>
+```
How to set up an SBD device on an Azure VM is described in [SBD fencing](./high-
First, check on the SBD server VM if there are ACL entries for every node in the cluster. Run the following command on the SBD server VM:
-<pre><code>
-targetcli ls
-</code></pre>
+```bash
+sudo targetcli ls
+```
On the test system, the output of the command looks like the following sample. ACL names like **iqn.2006-04.hso-db-0.local:hso-db-0** must be entered as the corresponding initiator names on the VMs. Every VM needs a different one.
-<pre><code>
+```output
| | o- sbddbhso ................................................................... [/sbd/sbddbhso (50.0MiB) write-thru activated]
| | o- alua ................................................................................................... [ALUA Groups: 1]
| | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
On the test system, the output of the command looks like the following sample. A
| | o- iqn.2006-04.hso-db-5.local:hso-db-5 .................................................................. [Mapped LUNs: 1]
| | | o- mapped_lun0 ............................................................................. [lun0 fileio/sbddbhso (rw)]
| | o- iqn.2006-04.hso-db-6.local:hso-db-6 .................................................................. [Mapped LUNs: 1]
-</code></pre>
+```
Then check that the initiator names on all the VMs are different and correspond to the previously shown entries. This example is from worker node 1 on site 1:
-<pre><code>
-cat /etc/iscsi/initiatorname.iscsi
-</code></pre>
+```bash
+sudo cat /etc/iscsi/initiatorname.iscsi
+```
The output looks like the following sample:
-<pre><code>
+```output
##
## /etc/iscsi/iscsi.initiatorname
##
The output looks like the following sample:
## may reject this initiator. The InitiatorName must be unique
## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.2006-04.hso-db-1.local:hso-db-1
-</code></pre>
+```
Next, verify that the **discovery** works correctly. Run the following command on every cluster node by using the IP address of the SBD server VM:
-<pre><code>
-iscsiadm -m discovery --type=st --portal=10.0.0.19:3260
-</code></pre>
+```bash
+sudo iscsiadm -m discovery --type=st --portal=10.0.0.19:3260
+```
The output should look like the following sample:
-<pre><code>
+```output
10.0.0.19:3260,1 iqn.2006-04.dbhso.local:dbhso
-</code></pre>
+```
The next proof point is to verify that the node sees the SDB device. Check it on every node including the majority maker node:
-<pre><code>
-lsscsi | grep dbhso
-</code></pre>
+```bash
+sudo lsscsi | grep dbhso
+```
The output should look like the following sample. However, the names might differ. The device name might also change after the VM reboots:
-<pre><code>
+```output
[6:0:0:0] disk LIO-ORG sbddbhso 4.0 /dev/sdm
-</code></pre>
+```
Depending on the status of the system, it sometimes helps to restart the iSCSI services to resolve issues. Then run the following commands:
-<pre><code>
-systemctl restart iscsi
-systemctl restart iscsid
-</code></pre>
+```bash
+sudo systemctl restart iscsi
+sudo systemctl restart iscsid
+```
From any node, you can check if all nodes are **clear**. Make sure that you use the correct device name on a specific node:
-<pre><code>
-sbd -d /dev/sdm list
-</code></pre>
+```bash
+sudo sbd -d /dev/sdm list
+```
The output should show **clear** for every node in the cluster:
-<pre><code>
+```output
0 hso-hana-vm-s1-0 clear
1 hso-hana-vm-s2-2 clear
2 hso-hana-vm-s2-1 clear
The output should show **clear** for every node in the cluster:
4 hso-hana-vm-s1-1 clear
5 hso-hana-vm-s2-0 clear
6 hso-hana-vm-s1-2 clear
-</code></pre>
+```
Another SBD check is the **dump** option of the **sbd** command. In this sample command and output from the majority maker node, the device name was **sdd**, not **sdm**:
-<pre><code>
-sbd -d /dev/sdd dump
-</code></pre>
-
+```bash
+sudo sbd -d /dev/sdd dump
+```
The output, apart from the device name, should look the same on all nodes:
-<pre><code>
+```output
==Dumping header on disk /dev/sdd
Header version : 2.1
UUID : 9fa6cc49-c294-4f1e-9527-c973f4d5a5b0
Timeout (allocate) : 2
Timeout (loop) : 1
Timeout (msgwait) : 120
==Header on disk /dev/sdd is dumped
-</code></pre>
+```
One more check for SBD is the possibility to send a message to another node. To send a message to worker node 2 on site 2, run the following command on worker node 1 on site 2:
-<pre><code>
+```bash
sbd -d /dev/sdm message hso-hana-vm-s2-2 test
-</code></pre>
+```
On the target VM side, **hso-hana-vm-s2-2** in this example, you can find the following entry in **/var/log/messages**:
-<pre><code>
+```output
/dev/disk/by-id/scsi-36001405e614138d4ec64da09e91aea68: notice: servant: Received command test from hso-hana-vm-s2-1 on disk /dev/disk/by-id/scsi-36001405e614138d4ec64da09e91aea68
-</code></pre>
+```
-Check that the entries in **/etc/sysconfig/sbd** correspond to the description in [Setting up Pacemaker on SUSE Linux Enterprise Server in Azure](./high-availability-guide-suse-pacemaker.md#sbd-with-an-iscsi-target-server). Verify that the startup setting in **/etc/iscsi/iscsid.conf** is set to automatic.
+Check that the entries in `/etc/sysconfig/sbd` correspond to the description in [Setting up Pacemaker on SUSE Linux Enterprise Server in Azure](./high-availability-guide-suse-pacemaker.md#sbd-with-an-iscsi-target-server). Verify that the startup setting in `/etc/iscsi/iscsid.conf` is set to automatic.
-The following entries are important in **/etc/sysconfig/sbd**. Adapt the **id** value if necessary:
+The following entries are important in `/etc/sysconfig/sbd`. Adapt the **id** value if necessary:
-<pre><code>
+```config
SBD_DEVICE="/dev/disk/by-id/scsi-36001405e614138d4ec64da09e91aea68;" SBD_PACEMAKER=yes SBD_STARTMODE=always SBD_WATCHDOG=yes
-</code></pre>
+```
-Check the startup setting in **/etc/iscsi/iscsid.conf**. The required setting should have happened with the following **iscsiadm** command, described in the documentation. Verify and adapt it manually with **vi** if it's different.
+Check the startup setting in `/etc/iscsi/iscsid.conf`. The required setting should already have been applied by the following **iscsiadm** command, described in the documentation. Verify it and adapt it manually with **vi** if it's different.
This command sets startup behavior:
-<pre><code>
-iscsiadm -m node --op=update --name=node.startup --value=automatic
-</code></pre>
+```bash
+sudo iscsiadm -m node --op=update --name=node.startup --value=automatic
+```
Make this entry in **/etc/iscsi/iscsid.conf**:
-<pre><code>
+```output
node.startup = automatic
-</code></pre>
+```
During testing and verification, after the restart of a VM, the SBD device wasn't visible anymore in some cases. There was a discrepancy between the startup setting and what YaST2 showed. To check the settings, take these steps:
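As a quick shell-level cross-check (a minimal sketch; the exact `iscsiadm` query shown here is an assumption and not the article's own step list), compare the startup value recorded for the discovered node with the default in `iscsid.conf`. Both should report `automatic`:

```bash
# Both of the following should report "node.startup = automatic" after the
# configuration described above; a mismatch is one possible cause of the
# SBD device not reappearing after a reboot.
sudo iscsiadm -m node --op=show | grep "node.startup"
sudo grep "^node.startup" /etc/iscsi/iscsid.conf
```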
During testing and verification, after the restart of a VM, the SBD device wasn'
After everything is set up correctly, you can run the following command on every node to check the status of the Pacemaker service:
-<pre><code>
-systemctl status pacemaker
-</code></pre>
+```bash
+sudo systemctl status pacemaker
+```
The top of the output should look like the following sample. It's important that the status after **Active** is shown as **active (running)**. The status after **Loaded** must be shown as **loaded** and **enabled**.
-<pre><code>
+```output
pacemaker.service - Pacemaker High Availability Cluster Manager
   Loaded: loaded (/usr/lib/systemd/system/pacemaker.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-09-07 05:56:27 UTC; 4 days ago
The top of the output should look like the following sample. It's important that
├─4502 /usr/lib/pacemaker/attrd
├─4503 /usr/lib/pacemaker/pengine
└─4504 /usr/lib/pacemaker/crmd
-</code></pre>
+```
If the setting is still on **disabled**, run the following command:
-<pre><code>
-systemctl enable pacemaker
-</code></pre>
+```bash
+sudo systemctl enable pacemaker
+```
To see all configured resources in Pacemaker, run the following command:
-<pre><code>
-crm status
-</code></pre>
+```bash
+sudo crm status
+```
The output should look like the following sample. It's fine that the **cln** and **msl** resources are shown as stopped on the majority maker VM, **hso-hana-dm**. There's no SAP HANA installation on the majority maker node. So the **cln** and **msl** resources are shown as stopped. It's important that it shows the correct total number of VMs, **7**. All VMs that are part of the cluster must be listed with the status **Online**. The current primary master node must be recognized correctly. In this example, it's **hso-hana-vm-s1-0**:
-<pre><code>
+```output
Stack: corosync
Current DC: hso-hana-dm (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Tue Sep 11 15:56:40 2018
Full list of resources:
Resource Group: g_ip_HSO_HDB00
    rsc_ip_HSO_HDB00 (ocf::heartbeat:IPaddr2): Started hso-hana-vm-s1-0
    rsc_nc_HSO_HDB00 (ocf::heartbeat:anything): Started hso-hana-vm-s1-0
-</code></pre>
+```
An important feature of Pacemaker is maintenance mode. In this mode, you can make modifications without provoking an immediate cluster action. An example is a VM reboot. A typical use case would be planned OS or Azure infrastructure maintenance. See [Planned maintenance](#planned-maintenance). Use the following command to put Pacemaker into maintenance mode:
-<pre><code>
-crm configure property maintenance-mode=true
-</code></pre>
+```bash
+sudo crm configure property maintenance-mode=true
+```
When you check with **crm status**, you notice in the output that all resources are marked as **unmanaged**. In this state, the cluster doesn't react to any changes like starting or stopping SAP HANA. The following sample shows the output of the **crm status** command while the cluster is in maintenance mode:
-<pre><code>
+```output
Stack: corosync
Current DC: hso-hana-dm (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Wed Sep 12 07:48:10 2018
Full list of resources:
Resource Group: g_ip_HSO_HDB00
    rsc_ip_HSO_HDB00 (ocf::heartbeat:IPaddr2): Started hso-hana-vm-s2-0 (unmanaged)
    rsc_nc_HSO_HDB00 (ocf::heartbeat:anything): Started hso-hana-vm-s2-0 (unmanaged)
-</code></pre>
+```
This command sample shows how to end the cluster maintenance mode:
-<pre><code>
-crm configure property maintenance-mode=false
-</code></pre>
+```bash
+sudo crm configure property maintenance-mode=false
+```
Another **crm** command gets the complete cluster configuration into an editor, so you can edit it. After it saves the changes, the cluster starts appropriate actions:
-<pre><code>
-crm configure edit
-</code></pre>
+```bash
+sudo crm configure edit
+```
To look at the complete cluster configuration, use the **crm show** option:
-<pre><code>
-crm configure show
-</code></pre>
+```bash
+sudo crm configure show
+```
After failures of cluster resources, the **crm status** command shows a list of **Failed Actions**. See the following sample of this output:
-<pre><code>
+```output
Stack: corosync
Current DC: hso-hana-dm (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Thu Sep 13 07:30:44 2018
Full list of resources:
Failed Actions:
* rsc_SAPHanaCon_HSO_HDB00_monitor_60000 on hso-hana-vm-s2-0 'unknown error' (1): call=86, status=complete, exitreason='none', last-rc-change='Wed Sep 12 17:01:28 2018', queued=0ms, exec=277663ms
-</code></pre>
+```
It's necessary to do a cluster cleanup after failures. Use the **crm** command again, and use the command option **cleanup** to get rid of these failed action entries. Name the corresponding cluster resource as follows:
-<pre><code>
-crm resource cleanup rsc_SAPHanaCon_HSO_HDB00
-</code></pre>
-
+```bash
+sudo crm resource cleanup rsc_SAPHanaCon_HSO_HDB00
+```
The command should return output like the following sample:
-<pre><code>
+```output
Cleaned up rsc_SAPHanaCon_HSO_HDB00:0 on hso-hana-dm
Cleaned up rsc_SAPHanaCon_HSO_HDB00:0 on hso-hana-vm-s1-0
Cleaned up rsc_SAPHanaCon_HSO_HDB00:0 on hso-hana-vm-s1-1
Cleaned up rsc_SAPHanaCon_HSO_HDB00:0 on hso-hana-vm-s2-0
Cleaned up rsc_SAPHanaCon_HSO_HDB00:0 on hso-hana-vm-s2-1
Cleaned up rsc_SAPHanaCon_HSO_HDB00:0 on hso-hana-vm-s2-2
Waiting for 7 replies from the CRMd....... OK
-</code></pre>
+```
As discussed in [Important notes](#important-notes), you shouldn't use a standar
The following three sample commands can force a cluster failover:
-<pre><code>
-echo c &gt /proc/sysrq-trigger
+```bash
+echo c | sudo tee /proc/sysrq-trigger
-crm resource migrate msl_SAPHanaCon_HSO_HDB00 hso-hana-vm-s2-0 force
+sudo crm resource migrate msl_SAPHanaCon_HSO_HDB00 hso-hana-vm-s2-0 force
-wicked ifdown eth0
-wicked ifdown eth1
-wicked ifdown eth2
+sudo wicked ifdown eth0
+sudo wicked ifdown eth1
+sudo wicked ifdown eth2
......
-wicked ifdown eth&ltn&gt
-</code></pre>
+sudo wicked ifdown eth<n>
+```
As described in [Planned maintenance](#planned-maintenance), a good way to monitor the cluster activities is to run **SAPHanaSR-showAttr** with the **watch** command:
-<pre><code>
-watch SAPHanaSR-showAttr
-</code></pre>
+```bash
+sudo watch SAPHanaSR-showAttr
+```
It also helps to look at the SAP HANA landscape status coming from an SAP Python script. The cluster setup is looking for this status value. It becomes clear when you think about a worker node failure. If a worker node goes down, SAP HANA doesn't immediately return an error for the health of the whole scale-out system.
There are some retries to avoid unnecessary failovers. The cluster reacts only i
You can monitor the SAP HANA landscape health status as user **\<HANA SID\>adm** by calling the SAP Python script as follows. You might have to adapt the path:
-<pre><code>
-watch python /hana/shared/HSO/exe/linuxx86_64/HDB_2.00.032.00.1533114046_eeaf4723ec52ed3935ae0dc9769c9411ed73fec5/python_support/landscapeHostConfiguration.py
-</code></pre>
-
+```bash
+sudo watch python /hana/shared/HSO/exe/linuxx86_64/HDB_2.00.032.00.1533114046_eeaf4723ec52ed3935ae0dc9769c9411ed73fec5/python_support/landscapeHostConfiguration.py
+```
The output of this command should look like the following sample. The **Host Status** column and the **overall host status** are both important. The actual output is wider, with additional columns. To make the output table more readable within this document, most columns on the right side were stripped:
-<pre><code>
+```output
| Host | Host | Host | Failover | Remove |
| | Active | Status | Status | Status |
| | | | | |
To make the output table more readable within this document, most columns on the
| hso-hana-vm-s2-2 | yes | ok | | |
overall host status: ok
-</code></pre>
+```
There's another command to check current cluster activities. See the following command and the output tail after the master node of the primary site was killed. You can see the list of transition actions like **promoting** the former secondary master node, **hso-hana-vm-s2-0**, as the new primary master. If everything is fine, and all activities are finished, this **Transition Summary** list has to be empty.
-<pre><code>
+```bash
crm_simulate -Ls
+```
+```output
........... Transition Summary:
Transition Summary:
* Promote rsc_SAPHanaCon_HSO_HDB00:5 (Slave -> Master hso-hana-vm-s2-0)
* Move rsc_ip_HSO_HDB00 (Started hso-hana-vm-s1-0 -> hso-hana-vm-s2-0)
* Move rsc_nc_HSO_HDB00 (Started hso-hana-vm-s1-0 -> hso-hana-vm-s2-0)
-</code></pre>
+```
Migrating a resource adds an entry to the cluster configuration. An example is f
First, force a cluster failover by migrating the **msl** resource to the current secondary master node. This command gives a warning that a **move constraint** was created:
-<pre><code>
-crm resource migrate msl_SAPHanaCon_HSO_HDB00 force
-
+```bash
+sudo crm resource migrate msl_SAPHanaCon_HSO_HDB00 force
+```
+```output
INFO: Move constraint created for msl_SAPHanaCon_HSO_HDB00
-</code></pre>
+```
Check the failover process via the command **SAPHanaSR-showAttr**. To monitor the cluster status, open a dedicated shell window and start the command with **watch**:
-<pre><code>
-watch SAPHanaSR-showAttr
-</code></pre>
+```bash
+sudo watch SAPHanaSR-showAttr
+```
The output should show the manual failover. The former secondary master node got **promoted**, in this sample, **hso-hana-vm-s2-0**. The former primary site was stopped, **lss** value **1** for former primary master node **hso-hana-vm-s1-0**:
-<pre><code>
+```output
Global cib-time prim sec srHook sync_state
global Wed Sep 12 07:40:02 2018 HSOS2 - SFAIL SFAIL
hso-hana-vm-s1-2 DEMOTED online slave::worker: -10000 HSOS
hso-hana-vm-s2-0 PROMOTED online master1:master:worker:master 150 HSOS2
hso-hana-vm-s2-1 DEMOTED online slave:slave:worker:slave -10000 HSOS2
hso-hana-vm-s2-2 DEMOTED online slave:slave:worker:slave -10000 HSOS2
-</code></pre>
-
+```
After the cluster failover and SAP HANA takeover, put the cluster into maintenance mode as described in [Pacemaker](#pacemaker). The commands **SAPHanaSR-showAttr** and **crm status** don't indicate anything about the constraints created by the resource migration. One option to make these constraints visible is to show the complete cluster resource configuration with the following command:
-<pre><code>
-crm configure show
-</code></pre>
+```bash
+sudo crm configure show
+```
Within the cluster configuration, you find a new location constraint caused by the former manual resource migration. This example entry starts with **location cli-**:
-<pre><code>
+```output
location cli-ban-msl_SAPHanaCon_HSO_HDB00-on-hso-hana-vm-s1-0 msl_SAPHanaCon_HSO_HDB00 role=Started -inf: hso-hana-vm-s1-0
-</code></pre>
+```
Unfortunately, such constraints might impact the overall cluster behavior. So it's mandatory to remove them again before you bring the whole system back up. With the **unmigrate** command, it's possible to clean up the location constraints that were created before. The naming might be a bit confusing. It doesn't try to migrate the resource back to the original VM from which it was migrated. It just removes the location constraints and also returns corresponding information when you run the command:
-<pre><code>
+```bash
crm resource unmigrate msl_SAPHanaCon_HSO_HDB00
+```
+```output
INFO: Removed migration constraints for msl_SAPHanaCon_HSO_HDB00
-</code></pre>
+```
At the end of the maintenance work, you stop the cluster maintenance mode as shown in [Pacemaker](#pacemaker).
At the end of the maintenance work, you stop the cluster maintenance mode as sho
To analyze Pacemaker cluster issues, it's helpful and also requested by SUSE support to run the **hb_report** utility. It collects all the important log files that you need to analyze what happened. This sample call uses a start and end time where a specific incident occurred. Also see [Important notes](#important-notes):
-<pre><code>
-hb_report -f "2018/09/13 07:36" -t "2018/09/13 08:00" /tmp/hb_report_log
-</code></pre>
+```bash
+sudo hb_report -f "2018/09/13 07:36" -t "2018/09/13 08:00" /tmp/hb_report_log
+```
The command tells you where it put the compressed log files:
-<pre><code>
+```output
The report is saved in /tmp/hb_report_log.tar.bz2
Report timespan: 09/13/18 07:36:00 - 09/13/18 08:00:00
-</code></pre>
+```
You can then extract the individual files via the standard **tar** command:
-<pre><code>
-tar -xvf hb_report_log.tar.bz2
-</code></pre>
+```bash
+sudo tar -xvf hb_report_log.tar.bz2
+```
When you look at the extracted files, you find all the log files. Most of them were put in separate directories for every node in the cluster:
-<pre><code>
+```output
-rw-r--r-- 1 root root  13655 Sep 13 09:01 analysis.txt
-rw-r--r-- 1 root root  14422 Sep 13 09:01 description.txt
-rw-r--r-- 1 root root      0 Sep 13 09:01 events.txt
drwxr-xr-x 3 root root 4096 Sep 13 09:01 hso-hana-vm-s2-0
drwxr-xr-x 3 root root   4096 Sep 13 09:01 hso-hana-vm-s2-1
drwxr-xr-x 3 root root   4096 Sep 13 09:01 hso-hana-vm-s2-2
-rw-r--r-- 1 root root 264726 Sep 13 09:00 journal.log
-</code></pre>
+```
Within the time range that was specified, the current master node **hso-hana-vm-s1-0** was killed. You can find entries related to this event in the **journal.log**:
-<pre><code>
+```output
2018-09-13T07:38:01+0000 hso-hana-vm-s2-1 su[93494]: (to hsoadm) root on none
2018-09-13T07:38:01+0000 hso-hana-vm-s2-1 su[93494]: pam_unix(su-l:session): session opened for user hsoadm by (uid=0)
2018-09-13T07:38:01+0000 hso-hana-vm-s2-1 systemd[1]: Started Session c44290 of user hsoadm.
Within the time range that was specified, the current master node **hso-hana-vm-
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 pacemakerd[28308]: notice: Node hso-hana-vm-s1-0 state is now lost
2018-09-13T07:38:02+0000 hso-hana-vm-s2-1 cib[28310]: notice: Purged 1 peer with id=1 and/or uname=hso-hana-vm-s1-0 from the membership cache
2018-09-13T07:38:03+0000 hso-hana-vm-s2-1 su[93494]: pam_unix(su-l:session): session closed for user hsoadm
-</code></pre>
+```
Another example is the Pacemaker log file on the secondary master, which became the new primary master. This excerpt shows that the status of the killed primary master node was set to **offline**:
-<pre><code>
+```output
Sep 13 07:38:02 [4178] hso-hana-vm-s2-0 stonith-ng: info: pcmk_cpg_membership: Node 3 still member of group stonith-ng (peer=hso-hana-vm-s1-2, counter=5.1)
Sep 13 07:38:02 [4178] hso-hana-vm-s2-0 stonith-ng: info: pcmk_cpg_membership: Node 4 still member of group stonith-ng (peer=hso-hana-vm-s2-0, counter=5.2)
Sep 13 07:38:02 [4178] hso-hana-vm-s2-0 stonith-ng: info: pcmk_cpg_membership: Node 5 still member of group stonith-ng (peer=hso-hana-vm-s2-1, counter=5.3)
Sep 13 07:38:02 [4184] hso-hana-vm-s2-0 crmd: info: pcmk_cpg_membershi
Sep 13 07:38:02 [4184] hso-hana-vm-s2-0 crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node hso-hana-vm-s1-0[1] - corosync-cpg is now offline
Sep 13 07:38:02 [4184] hso-hana-vm-s2-0 crmd: info: peer_update_callback: Client hso-hana-vm-s1-0/peer now has status [offline] (DC=hso-hana-dm, changed=4000000)
Sep 13 07:38:02 [4184] hso-hana-vm-s2-0 crmd: info: pcmk_cpg_membership: Node 2 still member of group crmd (peer=hso-hana-vm-s1-1, counter=5.0)
-</code></pre>
+```
Sep 13 07:38:02 [4184] hso-hana-vm-s2-0 crmd: info: pcmk_cpg_membershi
The following excerpts are from the SAP HANA **global.ini** file on cluster site 2. This example shows the hostname resolution entries for using different networks for SAP HANA internode communication and HSR:
-<pre><code>
+```config
[communication]
tcp_keepalive_interval = 20
internal_network = 10.0.2/24
listeninterface = .internal
-</code></pre>
+```
-<pre><code>
+```config
[internal_hostname_resolution]
10.0.2.40 = hso-hana-vm-s2-0
10.0.2.42 = hso-hana-vm-s2-2
10.0.2.41 = hso-hana-vm-s2-1
-</code></pre>
+```
-<pre><code>
+```config
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /hana/shared/myHooks
execution_order = 1
-</code></pre>
+```
-<pre><code>
+```config
[system_replication_communication]
listeninterface = .internal
listeninterface = .internal
10.0.1.40 = hso-hana-vm-s2-0
10.0.1.41 = hso-hana-vm-s2-1
10.0.1.42 = hso-hana-vm-s2-2
-</code></pre>
+```
listeninterface = .internal
The cluster solution provides a browser interface that offers a GUI for users who prefer menus and graphics to having all the commands on the shell level. To use the browser interface, replace **\<node\>** with an actual SAP HANA node in the following URL. Then enter the credentials of the cluster (user **cluster**):
-<pre><code>
+```config
https://<node>:7630
-</code></pre>
+```
This screenshot shows the cluster dashboard:
sentinel Monitor Automation Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-automation-health.md
For **Playbook was triggered**, you may see the following statuses:
| Error description | Suggested actions |
| -- | -- |
| **Could not add task: *\<TaskName>*.**<br>Incident/alert was not found. | Make sure the incident/alert exists and try again. |
-| **Could not modify property: *\<PropertyName>*.**<br> Incident/alert was not found. | Make sure the incident/alert exists and try again. |
+| **Could not modify property: *\<PropertyName>*.**<br>Incident/alert was not found. | Make sure the incident/alert exists and try again. |
+| **Could not modify property: *\<PropertyName>*.**<br>Too many requests, exceeding throttling limits. | |
| **Could not trigger playbook: *\<PlaybookName>*.**<br>Incident/alert was not found. | If the error occurred when trying to trigger a playbook on demand, make sure the incident/alert exists and try again. |
-| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because playbook was not found or because Microsoft Sentinel was missing permissions on it. | Edit the automation rule, find and select the playbook in its new location, and save. Make sure Microsoft Sentinel has [permission to run this playbook](tutorial-respond-threats-playbook.md?tabs=LAC#respond-to-incidents). |
-| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because it contains an unsupported trigger type. | Make sure your playbook starts with the [correct Logic Apps trigger](playbook-triggers-actions.md#microsoft-sentinel-triggers-summary): Microsoft Sentinel Incident or Microsoft Sentinel Alert. |
-| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because the subscription is disabled and marked as read-only. Playbooks in this subscription cannot be run until the subscription is re-enabled. | Re-enable the Azure subscription in which the playbook is located. |
-| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because it was disabled. | Enable your playbook, in Microsoft Sentinel in the Active Playbooks tab under Automation, or in the Logic Apps resource page. |
-| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because of invalid template definition. | There is an error in the playbook definition. Go to the Logic Apps designer to fix the issues and save the playbook. |
-| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because access control configuration restricts Microsoft Sentinel. | Logic Apps configurations allow restricting access to trigger the playbook. This restriction is in effect for this playbook. Remove this restriction so Microsoft Sentinel is not blocked. [Learn more](../logic-apps/logic-apps-securing-a-logic-app.md?tabs=azure-portal#restrict-access-by-ip-address-range) |
-| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because Microsoft Sentinel is missing permissions to run it. | Microsoft Sentinel requires [permissions to run playbooks](tutorial-respond-threats-playbook.md?tabs=LAC#respond-to-incidents). |
-| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because it wasn't migrated to new permissions model. Grant Microsoft Sentinel permissions to run this playbook and resave the rule. | Grant Microsoft Sentinel [permissions to run this playbook](tutorial-respond-threats-playbook.md?tabs=LAC#respond-to-incidents) and resave the rule. |
-| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered due to too many requests exceeding workflow throttling limits. | The number of waiting workflow runs has exceeded the maximum allowed limit. Try increasing the value of `'maximumWaitingRuns'` in [trigger concurrency configuration](../logic-apps/logic-apps-workflow-actions-triggers.md#change-waiting-runs-limit). |
-| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered due to too many requests exceeding throttling limits. | Learn more about [subscription and tenant limits](../azure-resource-manager/management/request-limits-and-throttling.md#subscription-and-tenant-limits). |
-| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because access was forbidden. Managed identity is missing configuration or Logic Apps network restriction has been set. | If the playbook uses managed identity, [make sure the managed identity was assigned with permissions](authenticate-playbooks-to-sentinel.md#authenticate-with-managed-identity). The playbook may have network restriction rules preventing it from being triggered as they block Microsoft Sentinel service. |
-| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because the subscription or resource group was locked. | Remove the lock to allow Microsoft Sentinel trigger playbooks in the locked scope. Learn more about [locked resources](../azure-resource-manager/management/lock-resources.md?tabs=json). |
-| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because caller is missing required playbook-triggering permissions on playbook or Microsoft Sentinel is missing permissions on it. | The user trying to trigger the playbook on demand is missing Logic Apps Contributor role on the playbook or to trigger the playbook. [Learn more](../logic-apps/logic-apps-securing-a-logic-app.md?tabs=azure-portal#restrict-access-by-ip-address-range) |
-| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered due to invalid credentials in connection. | [Check the credentials your connection is using](authenticate-playbooks-to-sentinel.md#manage-your-api-connections) in the **API connections** service in the Azure portal. |
-| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook could not be triggered because playbook ARM ID is not valid. | |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Either the playbook was not found, or Microsoft Sentinel was missing permissions on it. | Edit the automation rule, find and select the playbook in its new location, and save. Make sure Microsoft Sentinel has [permission to run this playbook](tutorial-respond-threats-playbook.md?tabs=LAC#respond-to-incidents). |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Contains an unsupported trigger type. | Make sure your playbook starts with the [correct Logic Apps trigger](playbook-triggers-actions.md#microsoft-sentinel-triggers-summary): Microsoft Sentinel Incident or Microsoft Sentinel Alert. |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>The subscription is disabled and marked as read-only. Playbooks in this subscription cannot be run until the subscription is re-enabled. | Re-enable the Azure subscription in which the playbook is located. |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>The playbook was disabled. | Enable your playbook, in Microsoft Sentinel in the Active Playbooks tab under Automation, or in the Logic Apps resource page. |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Invalid template definition. | There is an error in the playbook definition. Go to the Logic Apps designer to fix the issues and save the playbook. |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Access control configuration restricts Microsoft Sentinel. | Logic Apps configurations allow restricting access to trigger the playbook. This restriction is in effect for this playbook. Remove this restriction so Microsoft Sentinel is not blocked. [Learn more](../logic-apps/logic-apps-securing-a-logic-app.md?tabs=azure-portal#restrict-access-by-ip-address-range) |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Microsoft Sentinel is missing permissions to run it. | Microsoft Sentinel requires [permissions to run playbooks](tutorial-respond-threats-playbook.md?tabs=LAC#respond-to-incidents). |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook wasn't migrated to new permissions model. Grant Microsoft Sentinel permissions to run this playbook and resave the rule. | Grant Microsoft Sentinel [permissions to run this playbook](tutorial-respond-threats-playbook.md?tabs=LAC#respond-to-incidents) and resave the rule. |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Too many requests, exceeding workflow throttling limits. | The number of waiting workflow runs has exceeded the maximum allowed limit. Try increasing the value of `'maximumWaitingRuns'` in [trigger concurrency configuration](../logic-apps/logic-apps-workflow-actions-triggers.md#change-waiting-runs-limit). |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Too many requests, exceeding throttling limits. | Learn more about [subscription and tenant limits](../azure-resource-manager/management/request-limits-and-throttling.md#subscription-and-tenant-limits). |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Access was forbidden. Managed identity is missing configuration or Logic Apps network restriction has been set. | If the playbook uses managed identity, [make sure the managed identity was assigned with permissions](authenticate-playbooks-to-sentinel.md#authenticate-with-managed-identity). The playbook may have network restriction rules preventing it from being triggered as they block Microsoft Sentinel service. |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>The subscription or resource group was locked. | Remove the lock to allow Microsoft Sentinel to trigger playbooks in the locked scope. Learn more about [locked resources](../azure-resource-manager/management/lock-resources.md?tabs=json). |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Caller is missing required playbook-triggering permissions on playbook, or Microsoft Sentinel is missing permissions on it. | The user trying to trigger the playbook on demand is missing the Logic Apps Contributor role on the playbook, or permission to trigger the playbook. [Learn more](../logic-apps/logic-apps-securing-a-logic-app.md?tabs=azure-portal#restrict-access-by-ip-address-range) |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Invalid credentials in connection. | [Check the credentials your connection is using](authenticate-playbooks-to-sentinel.md#manage-your-api-connections) in the **API connections** service in the Azure portal. |
+| **Could not trigger playbook: *\<PlaybookName>*.**<br>Playbook ARM ID is not valid. | |
## Get the complete automation picture
sentinel Configure Audit Log Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-audit-log-rules.md
You use two analytics rules to monitor and analyze your SAP audit log data:
- **SAP - Dynamic Deterministic Audit Log Monitor (PREVIEW)**. Alerts on any SAP audit log events with minimal configuration. You can configure the rule for an even lower false-positive rate. [Learn how to configure the rule](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/microsoft-sentinel-for-sap-news-dynamic-sap-security-audit-log/ba-p/3326842).
- **SAP - Dynamic Anomaly based Audit Log Monitor Alerts (PREVIEW)**. Alerts on SAP audit log events when anomalies are detected, using machine learning capabilities and with no coding required. [Learn how to configure the rule](#set-up-the-sapdynamic-anomaly-based-audit-log-monitor-alerts-preview-rule-for-anomaly-detection).
-The two [SAP Audit log monitor rules](sap-solution-security-content.md#built-in-sap-analytics-rules-for-monitoring-the-sap-audit-log) are delivered as ready to run out of the box, and allow for further fine tuning using the [SAP_Dynamic_Audit_Log_Monitor_Configuration and SAP_User_Config watchlists](sap-solution-security-content.md#available-watchlists).
+The two [SAP Audit log monitor rules](sap-solution-security-content.md#monitoring-the-sap-audit-log) are delivered as ready to run out of the box, and allow for further fine tuning using the [SAP_Dynamic_Audit_Log_Monitor_Configuration and SAP_User_Config watchlists](sap-solution-security-content.md#available-watchlists).
## Anomaly detection
sentinel Deployment Solution Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-solution-configuration.md
Title: Configure Microsoft Sentinel solution for SAP® applications description: This article shows you how to configure the deployed Microsoft Sentinel solution for SAP® applications--++ Previously updated : 04/27/2022 Last updated : 03/10/2023 # Configure Microsoft Sentinel solution for SAP® applications
If you need to reenable the Docker container, run this command:
```
docker start sapcon-[SID]
```
+
+## Remove the user role and the optional CR installed on your ABAP system
+
+To remove the user role and optional CR imported to your system, import the deletion CR *NPLK900259* into your ABAP system.
sentinel Preparing Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/preparing-sap.md
Title: Deploy SAP Change Requests (CRs) and configure authorization description: This article shows you how to deploy the SAP Change Requests (CRs) necessary to prepare the environment for the installation of the SAP agent, so that it can properly connect to your SAP systems.--++ Previously updated : 04/07/2022 Last updated : 03/10/2023 # Deploy SAP Change Requests and configure authorization
The required authorizations are listed here by log type. Only the authorizations
| S_TABU_NAM | TABLE | SNCSYSACL |
| S_TABU_NAM | TABLE | USRACL |
+If needed, you can [remove the user role and the optional CR installed on your ABAP system](deployment-solution-configuration.md#remove-the-user-role-and-the-optional-cr-installed-on-your-abap-system).
-## Remove the user role and the optional CR installed on your ABAP system
+## Verify that the PAHI table (history of system, database, and SAP parameters) is updated at regular intervals
-To remove the user role and optional CR imported to your system, import the deletion CR *NPLK900259* into your ABAP system.
+The SAP PAHI table includes data on the history of the SAP system, the database, and SAP parameters. In some cases, the Microsoft Sentinel solution for SAP® applications can't monitor the SAP PAHI table at regular intervals, due to missing or faulty configuration (see the [SAP note](https://launchpad.support.sap.com/#/notes/12103) with more details on this issue). It's important to update the PAHI table and to monitor it frequently, so that the Microsoft Sentinel solution for SAP® applications can alert on suspicious actions that might happen at any time throughout the day.
+
+> [!NOTE]
+> For optimal results, in your machine's *systemconfig.ini* file, under the `[ABAP Table Selector]` section, enable both the `PAHI_FULL` and the `PAHI_INCREMENTAL` parameters.
+
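For reference, the relevant part of the data collector's *systemconfig.ini* might then look like the following minimal sketch (the exact value format is an assumption for illustration; keep the rest of your existing configuration unchanged):

```config
[ABAP Table Selector]
# Collect the PAHI table both as a full snapshot and incrementally (values assumed).
PAHI_FULL = True
PAHI_INCREMENTAL = True
```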
+**To verify that the PAHI table is updated at regular intervals**:
+
+1. Check whether the `SAP_COLLECTOR_FOR_PERFMONITOR` job, based on the RSCOLL00 program, is scheduled and running hourly, by the DDIC user in the 000 client.
+1. Check whether the `RSHOSTPH`, `RSSTATPH` and `RSDB_PAR` report names are maintained in the TCOLL table.
+ - `RSHOSTPH` report: Reads the operating system kernel parameters and stores this data in the PAHI table.
+ - `RSSTATPH` report: Reads the SAP profile parameters and stores this data in the PAHI table.
+ - `RSDB_PAR` report: Reads the database parameters and stores them in the PAHI table.
+
+If the job exists and is configured correctly, no further steps are needed.
+
+**If the job doesn't exist**:
+
+1. Log in to your SAP system in the 000 client.
+1. Execute the SM36 transaction.
+1. Under **Job Name**, type *SAP_COLLECTOR_FOR_PERFMONITOR*.
+
+ :::image type="content" source="media/preparing-sap/pahi-table-job-name.png" alt-text="Screenshot of adding the job used to monitor the SAP PAHI table.":::
+
+1. Select **Step** and fill in this information:
+ - Under **User**, type *DDIC*.
+ - Under *ABAP Program Name*, type *RSCOLL00*.
+1. Save the configuration.
+
+ :::image type="content" source="media/preparing-sap/pahi-table-define-user.png" alt-text="Screenshot of defining a user for the job used to monitor the SAP PAHI table.":::
+
+1. Select <kbd>F3</kbd> to go back to the previous screen.
+1. Select **Start Condition** to define the start condition.
+1. Select **Immediate** and select the **Periodic job** checkbox.
+
+ :::image type="content" source="media/preparing-sap/pahi-table-periodic-job.png" alt-text="Screenshot of defining the job used to monitor the SAP PAHI table as periodic.":::
+
+1. Select **Period values** and select **Hourly**.
+1. Select **Save** inside the dialog, and then select **Save** at the bottom.
+
+ :::image type="content" source="media/preparing-sap/pahi-table-hourly-job.png" alt-text="Screenshot of defining the job used to monitor the SAP PAHI table as hourly.":::
+
+1. To release the job, select **Save** at the top.
+
+ :::image type="content" source="media/preparing-sap/pahi-table-release-job.png" alt-text="Screenshot of releasing the job used to monitor the SAP PAHI table as hourly.":::
## Next steps
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-log-reference.md
SAPAuditLogAnomalies(LearningTime = 14d, DetectingTime=0h, SelectedSystems= dyna
| MaxTime | Time of last event observed |
| Score | the anomaly scores as produced by the anomaly model |
-See [Built-in SAP analytics rules for monitoring the SAP audit log](sap-solution-security-content.md#built-in-sap-analytics-rules-for-monitoring-the-sap-audit-log) for more information.
+See [Built-in SAP analytics rules for monitoring the SAP audit log](sap-solution-security-content.md#monitoring-the-sap-audit-log) for more information.
### SAPAuditLogConfigRecommend

The **SAPAuditLogConfigRecommend** is a helper function designed to offer recommendations for the configuration of the [SAP - Dynamic Anomaly based Audit Log Monitor Alerts (PREVIEW)](sap-solution-security-content.md#sapdynamic-anomaly-based-audit-log-monitor-alerts-preview) analytics rule. Learn how to [configure the rules](configure-audit-log-rules.md).
sentinel Sap Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-security-content.md
Title: Microsoft Sentinel solution for SAP® applications - security content reference | Microsoft Docs
+ Title: Microsoft Sentinel solution for SAP® applications - security content reference
description: Learn about the built-in security content provided by the Microsoft Sentinel solution for SAP® applications.--++ Previously updated : 01/24/2023 Last updated : 03/26/2023 # Microsoft Sentinel solution for SAP® applications: security content reference
For more information, see [Tutorial: Visualize and monitor your data](../monitor
## Built-in analytics rules
-### Built-in SAP analytics rules for monitoring the SAP audit log
+### Monitoring the configuration of static SAP security parameters
+
+To secure the SAP system, SAP has identified security-related parameters that need to be monitored for changes. With the "SAP - (Preview) Sensitive Static Parameter has Changed" rule, the Microsoft Sentinel solution for SAP® applications tracks [over 52 static security-related parameters](sap-suspicious-configuration-security-parameters.md) in the SAP system, which are built into Microsoft Sentinel.
+
+To understand parameter changes in the system, the Microsoft Sentinel solution for SAP® applications uses the parameter history table, which records changes made to system parameters every hour.
+
+The parameters are also reflected in the [SAPSystemParameters watchlist](#systemparameters). This watchlist allows users to add new parameters, disable existing parameters, and modify the values and severities per parameter and system role in production or non-production environments.
+
+When a change is made to one of these parameters, Microsoft Sentinel checks whether the change is security-related and whether the value is set according to the recommended values. If the change is suspected to fall outside the safe zone, Microsoft Sentinel creates an incident detailing the change and identifies who made it.
+
+Review the [list of parameters](sap-suspicious-configuration-security-parameters.md) that this rule monitors.
+
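As a rough sketch, assuming the watchlist alias matches the watchlist name and using the column names from the [SAPSystemParameters watchlist](#systemparameters) schema described later in this article, you can review which parameters are currently set to raise alerts from the Microsoft Sentinel **Logs** page:

```kusto
// List the monitored static security parameters that are currently set to raise alerts.
// The watchlist alias and the string form of EnableAlerts are assumptions; adjust for your workspace.
_GetWatchlist('SAPSystemParameters')
| where tostring(EnableAlerts) =~ 'true'
| project ParameterName, Option, ProductionSeverity, ProductionValues, NonProdSeverity, NonProdValues
```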
+### Monitoring the SAP audit log
The SAP audit log data is used across many of the analytics rules of the Microsoft Sentinel solution for SAP® applications. Some analytics rules look for specific events on the log, while others correlate indications from several logs to produce high-fidelity alerts and incidents. In addition, two analytics rules are designed to accommodate the entire set of standard SAP audit log events (183 different events), and any other custom events you may choose to log using the SAP audit log.
-Both SAP audit log monitoring analytics rules share the same data sources and the same configuration but differ in one critical aspect. While the ΓÇ£SAP - Dynamic Deterministic Audit Log MonitorΓÇ¥ requires deterministic alert thresholds and user exclusion rules, the ΓÇ£SAP - Dynamic Anomaly-based Audit Log Monitor Alerts (PREVIEW)ΓÇ¥ applies additional machine learning algorithms to filter out background noise in an unsupervised manner. For this reason, by default, most event types (or SAP message IDs) of the SAP audit log are being sent to the "Anomaly based" analytics rule, while the easier to define event types are sent to the deterministic analytics rule. This setting, along with other related settings, can be further configured to suit any system conditions.
+Both SAP audit log monitoring analytics rules share the same data sources and the same configuration but differ in one critical aspect. While the "SAP - Dynamic Deterministic Audit Log Monitor" rule requires deterministic alert thresholds and user exclusion rules, the "SAP - Dynamic Anomaly-based Audit Log Monitor Alerts (PREVIEW)" rule applies additional machine learning algorithms to filter out background noise in an unsupervised manner. For this reason, by default, most event types (or SAP message IDs) of the SAP audit log are sent to the "Anomaly based" analytics rule, while the easier-to-define event types are sent to the deterministic analytics rule. This setting, along with other related settings, can be further configured to suit any system conditions.
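As a hedged example, assuming the watchlist alias matches the **SAP_Dynamic_Audit_Log_Monitor_Configuration** watchlist name, you can check how the audit log message IDs are currently routed between the two rules:

```kusto
// Summarize how SAP audit log message IDs are routed between the deterministic and
// anomaly-based analytics rules. The watchlist alias is an assumption; confirm it in your workspace.
_GetWatchlist('SAP_Dynamic_Audit_Log_Monitor_Configuration')
| summarize MessageIDs = count() by tostring(RuleType)
```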
#### SAP - Dynamic Deterministic Audit Log Monitor
Learn more:
The following tables list the built-in [analytics rules](deploy-sap-security-content.md) that are included in the Microsoft Sentinel solution for SAP® applications, deployed from the Microsoft Sentinel Solutions marketplace.
-### Built-in SAP analytics rules for initial access
+### Initial access
| Rule name | Description | Source action | Tactics | | | | | |
The following tables list the built-in [analytics rules](deploy-sap-security-con
| **SAP - SPNego Attack** | Identifies SPNego Replay Attack. | **Data sources**: SAPcon - Audit Log | Impact, Lateral Movement | | **SAP - Dialog logon attempt from a privileged user** | Identifies dialog sign-in attempts, with the **AUM** type, by privileged users in an SAP system. For more information, see the [SAPUsersGetPrivileged](sap-solution-log-reference.md#sapusersgetprivileged). | Attempt to sign in from the same IP to several systems or clients within the scheduled time interval<br><br>**Data sources**: SAPcon - Audit Log | Impact, Lateral Movement | | **SAP - Brute force attacks** | Identifies brute force attacks on the SAP system using RFC logons | Attempt to log in from the same IP to several systems/clients within the scheduled time interval using RFC<br><br>**Data sources**: SAPcon - Audit Log | Credential Access |
-| **SAP - Multiple Logons from the same IP** | Identifies the sign-in of several users from same IP address within a scheduled time interval. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Sign in using several users through the same IP address. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access |
-| **SAP - Multiple Logons by User** | Identifies sign-ins of the same user from several terminals within scheduled time interval. <br><br>Available only via the Audit SAL method, for SAP versions 7.5 and higher. | Sign in using the same user, using different IP addresses. <br><br>**Data sources**: SAPcon - Audit Log | PreAttack, Credential Access, Initial Access, Collection <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) |
+| **SAP - Multiple Logons from the same IP** | Identifies the sign-in of several users from same IP address within a scheduled time interval. <br><br>**Sub-use case**: [Persistency](#persistency) | Sign in using several users through the same IP address. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access |
+| **SAP - Multiple Logons by User** | Identifies sign-ins of the same user from several terminals within scheduled time interval. <br><br>Available only via the Audit SAL method, for SAP versions 7.5 and higher. | Sign in using the same user, using different IP addresses. <br><br>**Data sources**: SAPcon - Audit Log | PreAttack, Credential Access, Initial Access, Collection <br><br>**Sub-use case**: [Persistency](#persistency) |
| **SAP - Informational - Lifecycle - SAP Notes were implemented in system** | Identifies SAP Note implementation in the system. | Implement an SAP Note using SNOTE/TCI. <br><br>**Data sources**: SAPcon - Change Requests | - |
-### Built-in SAP analytics rules for data exfiltration
+### Data exfiltration
| Rule name | Description | Source action | Tactics | | | | | |
The following tables list the built-in [analytics rules](deploy-sap-security-con
| **SAP - Multiple Spool Output Executions** |Identifies multiple spools for a user within a specific time range. | Create and run multiple spool jobs of any type by a user. (SP01) <br><br>**Data sources**: SAPcon - Spool Output Log, SAPcon - Audit Log | Collection, Exfiltration, Credential Access | | **SAP - Sensitive Tables Direct Access By RFC Logon** |Identifies a generic table access by RFC sign-in. <br><br> Maintain tables in the [SAP - Sensitive Tables](#tables) watchlist.<br><br> **Note**: Relevant for production systems only. | Open the table contents using SE11/SE16/SE16N.<br><br>**Data sources**: SAPcon - Audit Log | Collection, Exfiltration, Credential Access | | **SAP - Spool Takeover** |Identifies a user printing a spool request that was created by someone else. | Create a spool request using one user, and then output it using a different user. <br><br>**Data sources**: SAPcon - Spool Log, SAPcon - Spool Output Log, SAPcon - Audit Log | Collection, Exfiltration, Command and Control |
-| **SAP - Dynamic RFC Destination** | Identifies the execution of RFC using dynamic destinations. <br><br>**Sub-use case**: [Attempts to bypass SAP security mechanisms](#built-in-sap-analytics-rules-for-attempts-to-bypass-sap-security-mechanisms)| Execute an ABAP report that uses dynamic destinations (cl_dynamic_destination). For example, DEMO_RFC_DYNAMIC_DEST. <br><br>**Data sources**: SAPcon - Audit Log | Collection, Exfiltration |
+| **SAP - Dynamic RFC Destination** | Identifies the execution of RFC using dynamic destinations. <br><br>**Sub-use case**: [Attempts to bypass SAP security mechanisms](#attempts-to-bypass-sap-security-mechanisms)| Execute an ABAP report that uses dynamic destinations (cl_dynamic_destination). For example, DEMO_RFC_DYNAMIC_DEST. <br><br>**Data sources**: SAPcon - Audit Log | Collection, Exfiltration |
| **SAP - Sensitive Tables Direct Access By Dialog Logon** | Identifies generic table access via dialog sign-in. | Open table contents using `SE11`/`SE16`/`SE16N`. <br><br>**Data sources**: SAPcon - Audit Log | Discovery | | **SAP - (Preview) File Downloaded From a Malicious IP Address** | Identifies download of a file from an SAP system using an IP address known to be malicious. Malicious IP addresses are obtained from [threat intelligence services](../understand-threat-intelligence.md). | Download a file from a malicious IP. <br><br>**Data sources**: SAP security Audit log, Threat Intelligence | Exfiltration | | **SAP - (Preview) Data Exported from a Production System using a Transport** | Identifies data export from a production system using a transport. Transports are used in development systems and are similar to pull requests. This alert rule triggers incidents with medium severity when a transport that includes data from any table is released from a production system. The rule creates a high severity incident when the export includes data from a sensitive table. | Release a transport from a production system. <br><br>**Data sources**: SAP CR log, [SAP - Sensitive Tables](#tables) | Exfiltration |
The following tables list the built-in [analytics rules](deploy-sap-security-con
| **SAP - (Preview) High Volume of Potentially Sensitive Data Exported** | Identifies export of a high volume of data via files in proximity to an execution of a sensitive transaction, a sensitive program, or direct access to sensitive table. | Export high volume of data via files. <br><br>**Data sources**: SAP Security Audit Log, [SAP - Sensitive Tables](#tables), [SAP - Sensitive Transactions](#transactions), [SAP - Sensitive Programs](#programs) | Exfiltration |
-### Built-in SAP analytics rules for persistency
+### Persistency
| Rule name | Description | Source action | Tactics | | | | | |
The following tables list the built-in [analytics rules](deploy-sap-security-con
| **SAP - Execution of Obsolete/Insecure Program** |Identifies the execution of an obsolete or insecure ABAP program. <br><br> Maintain obsolete programs in the [SAP - Obsolete Programs](#programs) watchlist.<br><br> **Note**: Relevant for production systems only. | Run a program directly using SE38/SA38/SE80, or by using a background job. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Command and Control | | **SAP - Multiple Password Changes by User** | Identifies multiple password changes by user. | Change user password <br><br>**Data sources**: SAPcon - Audit Log | Credential Access | --
-### Built-in SAP analytics rules for attempts to bypass SAP security mechanisms
+### Attempts to bypass SAP security mechanisms
| Rule name | Description | Source action | Tactics | | | | | | | **SAP - Client Configuration Change** | Identifies changes for client configuration such as the client role or the change recording mode. | Perform client configuration changes using the `SCC4` transaction code. <br><br>**Data sources**: SAPcon - Audit Log | Defense Evasion, Exfiltration, Persistence |
-| **SAP - Data has Changed during Debugging Activity** | Identifies changes for runtime data during a debugging activity. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | 1. Activate Debug ("/h"). <br>2. Select a field for change and update its value.<br><br>**Data sources**: SAPcon - Audit Log | Execution, Lateral Movement |
+| **SAP - Data has Changed during Debugging Activity** | Identifies changes for runtime data during a debugging activity. <br><br>**Sub-use case**: [Persistency](#persistency) | 1. Activate Debug ("/h"). <br>2. Select a field for change and update its value.<br><br>**Data sources**: SAPcon - Audit Log | Execution, Lateral Movement |
| **SAP - Deactivation of Security Audit Log** | Identifies deactivation of the Security Audit Log. | Disable the Security Audit Log using `SM19/RSAU_CONFIG`. <br><br>**Data sources**: SAPcon - Audit Log | Exfiltration, Defense Evasion, Persistence | | **SAP - Execution of a Sensitive ABAP Program** |Identifies the direct execution of a sensitive ABAP program. <br><br>Maintain ABAP Programs in the [SAP - Sensitive ABAP Programs](#programs) watchlist. | Run a program directly using `SE38`/`SA38`/`SE80`. <br> <br>**Data sources**: SAPcon - Audit Log | Exfiltration, Lateral Movement, Execution | | **SAP - Execution of a Sensitive Transaction Code** | Identifies the execution of a sensitive Transaction Code. <br><br>Maintain transaction codes in the [SAP - Sensitive Transaction Codes](#transactions) watchlist. | Run a sensitive transaction code. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Execution |
-| **SAP - Execution of Sensitive Function Module** | Identifies the execution of a sensitive ABAP function module. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency)<br><br>**Note**: Relevant for production systems only. <br><br>Maintain sensitive functions in the [SAP - Sensitive Function Modules](#modules) watchlist, and make sure to activate table logging changes in the backend for the EUFUNC table. (SE13) | Run a sensitive function module directly using SE37. <br><br>**Data sources**: SAPcon - Table Data Log | Discovery, Command and Control
+| **SAP - Execution of Sensitive Function Module** | Identifies the execution of a sensitive ABAP function module. <br><br>**Sub-use case**: [Persistency](#persistency)<br><br>**Note**: Relevant for production systems only. <br><br>Maintain sensitive functions in the [SAP - Sensitive Function Modules](#modules) watchlist, and make sure to activate table logging changes in the backend for the EUFUNC table. (SE13) | Run a sensitive function module directly using SE37. <br><br>**Data sources**: SAPcon - Table Data Log | Discovery, Command and Control
| **SAP - (PREVIEW) HANA DB -Audit Trail Policy Changes** | Identifies changes for HANA DB audit trail policies. | Create or update the existing audit policy in security definitions. <br> <br>**Data sources**: Linux Agent - Syslog | Lateral Movement, Defense Evasion, Persistence | | **SAP - (PREVIEW) HANA DB -Deactivation of Audit Trail** | Identifies the deactivation of the HANA DB audit log. | Deactivate the audit log in the HANA DB security definition. <br><br>**Data sources**: Linux Agent - Syslog | Persistence, Lateral Movement, Defense Evasion | | **SAP - RFC Execution of a Sensitive Function Module** | Sensitive function models to be used in relevant detections. <br><br>Maintain function modules in the [SAP - Sensitive Function Modules](#module) watchlist. | Run a function module using RFC. <br><br>**Data sources**: SAPcon - Audit Log | Execution, Lateral Movement, Discovery | | **SAP - System Configuration Change** | Identifies changes for system configuration. | Adapt system change options or software component modification using the `SE06` transaction code.<br><br>**Data sources**: SAPcon - Audit Log |Exfiltration, Defense Evasion, Persistence |
-| **SAP - Debugging Activities** | Identifies all debugging related activities. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) |Activate Debug ("/h") in the system, debug an active process, add breakpoint to source code, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Discovery |
+| **SAP - Debugging Activities** | Identifies all debugging related activities. <br><br>**Sub-use case**: [Persistency](#persistency) |Activate Debug ("/h") in the system, debug an active process, add breakpoint to source code, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Discovery |
| **SAP - Security Audit Log Configuration Change** | Identifies changes in the configuration of the Security Audit Log | Change any Security Audit Log Configuration using `SM19`/`RSAU_CONFIG`, such as the filters, status, recording mode, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Persistence, Exfiltration, Defense Evasion | | **SAP - Transaction is unlocked** |Identifies unlocking of a transaction. | Unlock a transaction code using `SM01`/`SM01_DEV`/`SM01_CUS`. <br><br>**Data sources**: SAPcon - Audit Log | Persistence, Execution | | **SAP - Dynamic ABAP Program** | Identifies the execution of dynamic ABAP programming. For example, when ABAP code was dynamically created, changed, or deleted. <br><br> Maintain excluded transaction codes in the [SAP - Transactions for ABAP Generations](#transactions) watchlist. | Create an ABAP Report that uses ABAP program generation commands, such as INSERT REPORT, and then run the report. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Command and Control, Impact |
-### Built-in SAP analytics rules for suspicious privileges operations
+### Suspicious privileges operations
| Rule name | Description | Source action | Tactics | | | | | |
The following tables list the built-in [analytics rules](deploy-sap-security-con
| **SAP - Sensitive privileged user logged in** | Identifies the Dialog sign-in of a sensitive privileged user. <br><br>Maintain privileged users in the [SAP - Privileged Users](#users) watchlist. | Sign in to the backend system using `SAP*` or another privileged user. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access, Credential Access | | **SAP - Sensitive privileged user makes a change in other user** | Identifies changes of sensitive, privileged users in other users. | Change user details / authorizations using SU01. <br><br>**Data Sources**: SAPcon - Audit Log | Privilege Escalation, Credential Access | | **SAP - Sensitive Users Password Change and Login** | Identifies password changes for privileged users. | Change the password for a privileged user and sign into the system. <br>Maintain privileged users in the [SAP - Privileged Users](#users) watchlist.<br><br>**Data sources**: SAPcon - Audit Log | Impact, Command and Control, Privilege Escalation |
-| **SAP - User Creates and uses new user** | Identifies a user creating and using other users. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Create a user using SU01, and then sign in, using the newly created user and the same IP address.<br><br>**Data sources**: SAPcon - Audit Log | Discovery, PreAttack, Initial Access |
-| **SAP - User Unlocks and uses other users** | Identifies a user being unlocked and used by other users. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Unlock a user using SU01, and then sign in using the unlocked user and the same IP address.<br><br>**Data sources**: SAPcon - Audit Log, SAPcon - Change Documents Log | Discovery, PreAttack, Initial Access, Lateral Movement |
+| **SAP - User Creates and uses new user** | Identifies a user creating and using other users. <br><br>**Sub-use case**: [Persistency](#persistency) | Create a user using SU01, and then sign in, using the newly created user and the same IP address.<br><br>**Data sources**: SAPcon - Audit Log | Discovery, PreAttack, Initial Access |
+| **SAP - User Unlocks and uses other users** | Identifies a user being unlocked and used by other users. <br><br>**Sub-use case**: [Persistency](#persistency) | Unlock a user using SU01, and then sign in using the unlocked user and the same IP address.<br><br>**Data sources**: SAPcon - Audit Log, SAPcon - Change Documents Log | Discovery, PreAttack, Initial Access, Lateral Movement |
| **SAP - Assignment of a sensitive profile** | Identifies new assignments of a sensitive profile to a user. <br><br>Maintain sensitive profiles in the [SAP - Sensitive Profiles](#profiles) watchlist. | Assign a profile to a user using `SU01`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation | | **SAP - Assignment of a sensitive role** | Identifies new assignments for a sensitive role to a user. <br><br>Maintain sensitive roles in the [SAP - Sensitive Roles](#roles) watchlist.| Assign a role to a user using `SU01` / `PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log, Audit Log | Privilege Escalation | | **SAP - (PREVIEW) Critical authorizations assignment - New Authorization Value** | Identifies the assignment of a critical authorization object value to a new user. <br><br>Maintain critical authorization objects in the [SAP - Critical Authorization Objects](#objects) watchlist. | Assign a new authorization object or update an existing one in a role, using `PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation |
These watchlists provide the configuration for the Microsoft Sentinel solution f
| <a name="roles"></a>**SAP - Sensitive Roles** | Sensitive roles, where assignment should be governed. <br><br>- **Role**: SAP authorization role, such as `SAP_BC_BASIS_ADMIN` <br>- **Description**: A meaningful role description. | | <a name="transactions"></a>**SAP - Sensitive Transactions** | Sensitive transactions where execution should be governed. <br><br>- **TransactionCode**: SAP transaction code, such as `RZ11` <br>- **Description**: A meaningful code description. | | <a name="systems"></a>**SAP - Systems** | Describes the landscape of SAP systems according to role and usage.<br><br>- **SystemID**: the SAP system ID (SYSID) <br>- **SystemRole**: the SAP system role, one of the following values: `Sandbox`, `Development`, `Quality Assurance`, `Training`, `Production` <br>- **SystemUsage**: The SAP system usage, one of the following values: `ERP`, `BW`, `Solman`, `Gateway`, `Enterprise Portal` |
+| <a name="systemparameters"></a>**SAPSystemParameters** | Parameters to watch for [suspicious configuration changes](#monitoring-the-configuration-of-static-sap-security-parameters). This watchlist is prefilled with recommended values (according to SAP best practice), and you can extend the watchlist to include more parameters. If you don't want to receive alerts for a parameter, set `EnableAlerts` to `false`.<br><br>- **ParameterName**: The name of the parameter.<br>- **Comment**: The SAP standard parameter description.<br>- **EnableAlerts**: Defines whether to enable alerts for this parameter. Values are `true` and `false`.<br>- **Option**: Defines in which case to trigger an alert: If the parameter value is greater or equal (`GE`), less or equal (`LE`), or equal (`EQ`).<br> For example, if the `login/fails_to_user_lock` SAP parameter is set to `LE` (less or equal), and a value of `5`, once Microsoft Sentinel detects a change to this specific parameter, it compares the newly-reported value and the expected value. If the new value is `4`, Microsoft Sentinel doesn't trigger an alert. If the new value is `6`, Microsoft Sentinel triggers an alert.<br>- **ProductionSeverity**: The incident severity for production systems.<br>- **ProductionValues**: Permitted values for production systems.<br>- **NonProdSeverity**: The incident severity for non-production systems.<br>- **NonProdValues**: Permitted values for non-production systems. |
| <a name="users"></a>**SAP - Excluded Users** | System users that are logged in and need to be ignored, such as for the Multiple logons by user alert. <br><br>- **User**: SAP User <br>- **Description**: A meaningful user description | | <a name="networks"></a>**SAP - Excluded Networks** | Maintain internal, excluded networks for ignoring web dispatchers, terminal servers, and so on. <br><br>- **Network**: Network IP address or range, such as `111.68.128.0/17` <br>- **Description**: A meaningful network description | | <a name="modules"></a>**SAP - Obsolete Function Modules** | Obsolete function modules, whose execution should be governed. <br><br>- **FunctionModule**: ABAP Function Module, such as TH_SAPREL <br>- **Description**: A meaningful function module description | | <a name="programs"></a>**SAP - Obsolete Programs** | Obsolete ABAP programs (reports), whose execution should be governed. <br><br>- **ABAPProgram**:ABAP Program, such as TH_ RSPFLDOC <br>- **Description**: A meaningful ABAP program description | | <a name="transactions"></a>**SAP - Transactions for ABAP Generations** | Transactions for ABAP generations whose execution should be governed. <br><br>- **TransactionCode**:Transaction Code, such as SE11. <br>- **Description**: A meaningful Transaction Code description | | <a name="servers"></a>**SAP - FTP Servers** | FTP Servers for identification of unauthorized connections. <br><br>- **Client**:such as 100. <br>- **FTP_Server_Name**: FTP server name, such as `http://contoso.com/` <br>-**FTP_Server_Port**:FTP server port, such as 22. <br>- **Description**A meaningful FTP Server description |
-| <a name="objects"></a>**SAP_Dynamic_Audit_Log_Monitor_Configuration** | Configure the SAP audit log alerts by assigning each message ID a severity level as required by you, per system role (production, non-production). This watchlist details all available SAP standard audit log message IDs. The watchlist can be extended to contain additional message IDs you might create on your own using ABAP enhancements on their SAP NetWeaver systems. This watchlist also allows for configuring a designated team to handle each of the event types, and excluding users by SAP roles, SAP profiles or by tags from the **SAP_User_Config** watchlist. This watchlist is one of the core components used for [configuring](configure-audit-log-rules.md) the [built-in SAP analytics rules for monitoring the SAP audit log](#built-in-sap-analytics-rules-for-monitoring-the-sap-audit-log). <br><br>- **MessageID**: The SAP Message ID, or event type, such as `AUD` (User master record changes), or `AUB` (authorization changes). <br>- **DetailedDescription**: A markdown enabled description to be shown on the incident pane. <br>- **ProductionSeverity**: The desired severity for the incident to be created with for production systems `High`, `Medium`. Can be set as `Disabled`. <br>- **NonProdSeverity**: The desired severity for the incident to be created with for non-production systems `High`, `Medium`. Can be set as `Disabled`. <br>- **ProductionThreshold** The "Per hour" count of events to be considered as suspicious for production systems `60`. <br>- **NonProdThreshold** The "Per hour" count of events to be considered as suspicious for non-production systems `10`. <br>- **RolesTagsToExclude**: This field accepts SAP role name, SAP profile names or tags from the SAP_User_Config watchlist. These are then used to exclude the associated users from specific event types. See options for role tags at the end of this list. <br>- **RuleType**: Use `Deterministic` for the event type to be sent off to the [SAP - Dynamic Deterministic Audit Log Monitor](#sapdynamic-deterministic-audit-log-monitor), or `AnomaliesOnly` to have this event covered by the [SAP - Dynamic Anomaly based Audit Log Monitor Alerts (PREVIEW)](#sapdynamic-anomaly-based-audit-log-monitor-alerts-preview).<br><br>For the **RolesTagsToExclude** field:<br>- If you list SAP roles or [SAP profiles](sap-solution-deploy-alternate.md#configuring-user-master-data-collection), this excludes any user with the listed roles or profiles from these event types for the same SAP system. For example, if you define the `BASIC_BO_USERS` ABAP role for the RFC related event types, Business Objects users won't trigger incidents when making massive RFC calls.<br>- Tagging an event type is similar to specifying SAP roles or profiles, but tags can be created in the workspace, so SOC teams can exclude users by activity without depending on the SAP team. For example, the audit message IDs AUB (authorization changes) and AUD (user master record changes) are assigned the `MassiveAuthChanges` tag. Users assigned this tag are excluded from the checks for these activities. Running the workspace `SAPAuditLogConfigRecommend` function produces a list of recommended tags to be assigned to users, such as `Add the tags ["GenericTablebyRFCOK"] to user SENTINEL_SRV using the SAP_User_Config watchlist`.
-| <a name="objects"></a>**SAP_User_Config** | Allows for fine tuning alerts by excluding /including users in specific contexts and is also used for [configuring](configure-audit-log-rules.md) the [built-in SAP analytics rules for monitoring the SAP audit log](#built-in-sap-analytics-rules-for-monitoring-the-sap-audit-log). <br><br> - **SAPUser**: The SAP user <br> - **Tags**: Tags are used to identify users against certain activity. For example Adding the tags ["GenericTablebyRFCOK"] to user SENTINEL_SRV will prevent RFC related incidents to be created for this specific user <br>**Other active directory user identifiers** <br>- AD User Identifier <br>- User On-Premises Sid <br>- User Principal Name |
+| <a name="objects"></a>**SAP_Dynamic_Audit_Log_Monitor_Configuration** | Configure the SAP audit log alerts by assigning each message ID a severity level as required by you, per system role (production, non-production). This watchlist details all available SAP standard audit log message IDs. The watchlist can be extended to contain additional message IDs you might create on your own using ABAP enhancements on their SAP NetWeaver systems. This watchlist also allows for configuring a designated team to handle each of the event types, and excluding users by SAP roles, SAP profiles or by tags from the **SAP_User_Config** watchlist. This watchlist is one of the core components used for [configuring](configure-audit-log-rules.md) the [built-in SAP analytics rules for monitoring the SAP audit log](#monitoring-the-sap-audit-log). <br><br>- **MessageID**: The SAP Message ID, or event type, such as `AUD` (User master record changes), or `AUB` (authorization changes). <br>- **DetailedDescription**: A markdown enabled description to be shown on the incident pane. <br>- **ProductionSeverity**: The desired severity for the incident to be created with for production systems `High`, `Medium`. Can be set as `Disabled`. <br>- **NonProdSeverity**: The desired severity for the incident to be created with for non-production systems `High`, `Medium`. Can be set as `Disabled`. <br>- **ProductionThreshold** The "Per hour" count of events to be considered as suspicious for production systems `60`. <br>- **NonProdThreshold** The "Per hour" count of events to be considered as suspicious for non-production systems `10`. <br>- **RolesTagsToExclude**: This field accepts SAP role name, SAP profile names or tags from the SAP_User_Config watchlist. These are then used to exclude the associated users from specific event types. See options for role tags at the end of this list. <br>- **RuleType**: Use `Deterministic` for the event type to be sent off to the [SAP - Dynamic Deterministic Audit Log Monitor](#sapdynamic-deterministic-audit-log-monitor), or `AnomaliesOnly` to have this event covered by the [SAP - Dynamic Anomaly based Audit Log Monitor Alerts (PREVIEW)](#sapdynamic-anomaly-based-audit-log-monitor-alerts-preview).<br><br>For the **RolesTagsToExclude** field:<br>- If you list SAP roles or [SAP profiles](sap-solution-deploy-alternate.md#configuring-user-master-data-collection), this excludes any user with the listed roles or profiles from these event types for the same SAP system. For example, if you define the `BASIC_BO_USERS` ABAP role for the RFC related event types, Business Objects users won't trigger incidents when making massive RFC calls.<br>- Tagging an event type is similar to specifying SAP roles or profiles, but tags can be created in the workspace, so SOC teams can exclude users by activity without depending on the SAP team. For example, the audit message IDs AUB (authorization changes) and AUD (user master record changes) are assigned the `MassiveAuthChanges` tag. Users assigned this tag are excluded from the checks for these activities. Running the workspace `SAPAuditLogConfigRecommend` function produces a list of recommended tags to be assigned to users, such as `Add the tags ["GenericTablebyRFCOK"] to user SENTINEL_SRV using the SAP_User_Config watchlist`.
+| <a name="objects"></a>**SAP_User_Config** | Allows for fine tuning alerts by excluding /including users in specific contexts and is also used for [configuring](configure-audit-log-rules.md) the [built-in SAP analytics rules for monitoring the SAP audit log](#monitoring-the-sap-audit-log). <br><br> - **SAPUser**: The SAP user <br> - **Tags**: Tags are used to identify users against certain activity. For example Adding the tags ["GenericTablebyRFCOK"] to user SENTINEL_SRV will prevent RFC related incidents to be created for this specific user <br>**Other active directory user identifiers** <br>- AD User Identifier <br>- User On-Premises Sid <br>- User Principal Name |
## Next steps
sentinel Sap Suspicious Configuration Security Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-suspicious-configuration-security-parameters.md
+
+ Title: SAP security parameters monitored by the Microsoft Sentinel solution for SAP® to detect suspicious configuration changes
+description: Learn about the security parameters in the SAP system that the Microsoft Sentinel solution for SAP® applications monitors for suspicious configuration changes.
+ Last updated : 03/26/2023
+# Monitored SAP security parameters for detecting suspicious configuration changes
+
+This article details the security parameters in the SAP system that the Microsoft Sentinel solution for SAP® applications monitors as part of the ["SAP - (Preview) Sensitive Static Parameter has Changed" analytics rule](sap-solution-security-content.md#monitoring-the-configuration-of-static-sap-security-parameters).
+
+The Microsoft Sentinel solution for SAP® applications will provide updates for this content according to SAP best practice changes. You can also add parameters to watch for, change values according to your organization's needs, and disable specific parameters in the [SAPSystemParameters watchlist](sap-solution-security-content.md#systemparameters).
+
+## Monitored static SAP security parameters
+
+This list includes the static SAP security parameters that the Microsoft Sentinel solution for SAP® applications monitors to protect your SAP system. The list isn't a recommendation for configuring these parameters. For configuration considerations, consult your SAP admins.
+
+|Parameter |Description |Security value/considerations |
+||||
+|gw/accept_remote_trace_level |Controls whether or not the Central Process Integration (CPI) and Remote Function Call (RFC) subsystems adopt the remote trace level. When this parameter is set to `1`, the CPI and RFC subsystems accept and adopt the remote trace levels. When set to `0`, remote trace levels aren't accepted and the local trace level is used instead.<br><br>The trace level is a setting that determines the level of detail recorded in the system log for a specific program or process. When the subsystems adopt the trace levels, you can set the trace level for a program or process from a remote system and not only from the local system. This setting can be useful in situations where remote debugging or troubleshooting is required. |The parameter can be configured to restrict the trace level accepted from external systems. Setting a lower trace level may reduce the amount of information that external systems can obtain about the internal workings of the SAP system. |
+|login/password_change_for_SSO |Controls how password changes are enforced in single sign-on situations. |High, because enforcing password changes can help prevent unauthorized access to the system by attackers who may have obtained valid credentials through phishing or other means. |
+|icm/accept_remote_trace_level |Determines whether the Internet Communication Manager (ICM) accepts remote trace level changes from external systems. |Medium, because allowing remote trace level changes can provide valuable diagnostic information to attackers and potentially compromise system security. |
+|rdisp/gui_auto_logout |Specifies the maximum idle time for SAP GUI connections before automatically logging out the user. | High, because automatically logging out inactive users can help prevent unauthorized access to the system by attackers who may have gained access to a user's workstation. |
+|rsau/enable |Controls whether the Security Audit log is enabled. |High, because the Security Audit log can provide valuable information for detecting and investigating security incidents. |
+|login/min_password_diff |Specifies the minimum number of characters that must differ between the old and new password when users change their passwords. |High, because requiring a minimum number of character differences can help prevent users from choosing weak passwords that can easily be guessed. |
+|login/min_password_digits |Sets the minimum number of digits required in a password for a user. |High, because the parameter increases the complexity of passwords and makes them harder to guess or crack. |
+|login/ticket_only_by_https |This parameter controls whether authentication tickets are only sent via HTTPS or can be sent via HTTP as well. |High, because using HTTPS for ticket transmission encrypts the data in transit, making it more secure. |
+|auth/rfc_authority_check |Controls whether authority checks are performed for RFCs. |High, because enabling this parameter helps prevent unauthorized access to sensitive data and functions via RFCs. |
+|gw/acl_mode |Sets the mode for the access control list (ACL) file used by the SAP gateway. |High, because the parameter controls access to the gateway and helps prevent unauthorized access to the SAP system. |
+|gw/logging |Controls the logging settings for the SAP gateway. |High, because this parameter can be used to monitor and detect suspicious activity or potential security breaches. |
+|login/fails_to_session_end |Sets the number of invalid login attempts allowed before the user's session is terminated. |High, because the parameter helps prevent brute-force attacks on user accounts. |
+|wdisp/ssl_encrypt |Sets the mode for SSL re-encryption of HTTP requests. |High, because this parameter ensures that data transmitted over HTTP is encrypted, which helps prevent eavesdropping and data tampering. |
+|login/no_automatic_user_sapstar |Controls the automatic login of the SAP* user. |High, because this parameter helps prevent unauthorized access to the SAP system via the default SAP* account. |
+|rsau/max_diskspace/local |Defines the maximum amount of disk space that can be used for local storage of audit logs. This security parameter helps prevent disk space from filling up and ensures that audit logs remain available for investigation. |Setting an appropriate value for this parameter helps prevent the local audit logs from consuming too much disk space, which could lead to system performance issues or even denial-of-service attacks. On the other hand, setting a value that's too low may result in the loss of audit log data, which may be required for compliance and auditing. |
+|snc/extid_login_diag |Enables or disables the logging of external ID in Secure Network Communication (SNC) logon errors. This security parameter can help identify attempts of unauthorized access to the system. |Enabling this parameter can be helpful for troubleshooting SNC-related issues, because it provides additional diagnostic information. However, the parameter may also expose sensitive information about the external security products used by the system, which could be a potential security risk if that information falls into the wrong hands. |
+|login/password_change_waittime |Defines the number of days a user must wait before changing their password again. This security parameter helps enforce password policies and ensure that users change their passwords periodically. |Setting an appropriate value for this parameter can help ensure that users change their passwords regularly enough to maintain the security of the SAP system. At the same time, setting the wait time too short can be counterproductive because users may be more likely to reuse passwords or choose weak passwords that are easier to remember. |
+|snc/accept_insecure_cpic |Determines whether or not the system accepts insecure SNC connections using the CPIC protocol. This security parameter controls the level of security for SNC connections. |Enabling this parameter can increase the risk of data interception or manipulation, because it accepts SNC-protected connections that don't meet the minimum security standards. Therefore, the recommended security value for this parameter is to set it to `0`, which means that only SNC connections that meet the minimum security requirements are accepted. |
+|snc/accept_insecure_r3int_rfc |Determines whether or not the system accepts insecure SNC connections for R/3 and RFC protocols. This security parameter controls the level of security for SNC connections. |Enabling this parameter can increase the risk of data interception or manipulation, because it accepts SNC-protected connections that don't meet the minimum security standards. Therefore, the recommended security value for this parameter is to set it to `0`, which means that only SNC connections that meet the minimum security requirements are accepted. |
+|snc/accept_insecure_rfc |Determines whether or not the system accepts insecure SNC connections using RFC protocols. This security parameter controls the level of security for SNC connections. |Enabling this parameter can increase the risk of data interception or manipulation, because it accepts SNC-protected connections that don't meet the minimum security standards. Therefore, the recommended security value for this parameter is to set it to `0`, which means that only SNC connections that meet the minimum security requirements are accepted. |
+|snc/data_protection/max |Defines the maximum level of data protection for SNC connections. This security parameter controls the level of encryption used for SNC connections. |Setting a high value for this parameter can increase the level of data protection and reduce the risk of data interception or manipulation. The recommended security value for this parameter depends on the organization's specific security requirements and risk management strategy. |
+|rspo/auth/pagelimit |Defines the maximum number of spool requests that a user can display or delete at once. This security parameter helps to prevent denial-of-service attacks on the spool system. |This parameter doesn't directly affect the security of the SAP system, but can help to prevent unauthorized access to sensitive authorization data. By limiting the number of entries displayed per page, it can reduce the risk of unauthorized individuals viewing sensitive authorization information. |
+|snc/accept_insecure_gui |Determines whether or not the system accepts insecure SNC connections using the GUI. This security parameter controls the level of security for SNC connections. |Setting the value of this parameter to `0` is recommended to ensure that SNC connections made through the SAP GUI are secure, and to reduce the risk of unauthorized access or interception of sensitive data. Allowing insecure SNC connections may increase the risk of unauthorized access to sensitive information or data interception, and should only be done when there is a specific need and the risks have been properly assessed. |
+|login/accept_sso2_ticket |Enables or disables the acceptance of SSO2 tickets for logon. This security parameter controls the level of security for logon to the system. |Enabling SSO2 can provide a more streamlined and convenient user experience, but also introduces additional security risks. If an attacker gains access to a valid SSO2 ticket, they may be able to impersonate a legitimate user and gain unauthorized access to sensitive data or perform malicious actions. |
+|login/multi_login_users |Defines whether or not multiple logon sessions are allowed for the same user. This security parameter controls the level of security for user sessions and helps prevent unauthorized access. |Enabling this parameter can help prevent unauthorized access to SAP systems by limiting the number of concurrent logins for a single user. When this parameter is set to `0`, only one login session is allowed per user, and additional login attempts are rejected. This can help prevent unauthorized access to SAP systems in case a user's login credentials are compromised or shared with others. |
+|login/password_expiration_time |Specifies the maximum time interval in days for which a password is valid. When this time elapses, the user is prompted to change their password. |Setting this parameter to a lower value can improve security by ensuring that passwords are changed frequently. |
+|login/password_max_idle_initial |Specifies the maximum time interval in minutes for which a user can remain logged on without performing any activity. After this time elapses, the user is automatically logged off. |Setting a lower value for this parameter can improve security by ensuring that idle sessions aren't left open for extended periods of time. |
+|login/password_history_size |Specifies the number of previous passwords that a user isn't allowed to reuse. |This parameter prevents users from repeatedly using the same passwords, which can improve security. |
+|snc/data_protection/use |Enables the use of SNC data protection. When enabled, SNC ensures that all data transmitted between SAP systems is encrypted and secure. | |
+|rsau/max_diskspace/per_day |Specifies the maximum amount of disk space in MB that can be used for audit logs per day. Setting a lower value for this parameter can help ensure that audit logs don't consume too much disk space and can be managed effectively. | |
+|snc/enable |Enables SNC for communication between SAP systems. |When enabled, SNC provides an extra layer of security by encrypting data transmitted between systems. |
+|auth/no_check_in_some_cases |Disables authorization checks in certain cases. |While this parameter may improve performance, it can also pose a security risk by allowing users to perform actions they may not have permission for. |
+|auth/object_disabling_active |Disables specific authorization objects for user accounts that have been inactive for a specified period of time. |Can help improve security by reducing the number of inactive accounts with unnecessary permissions. |
+|login/disable_multi_gui_login |Prevents a user from being logged in to multiple GUI sessions simultaneously. |This parameter can help improve security by ensuring that users are only logged in to one session at a time. |
+|login/min_password_lng |Specifies the minimum length that a password can be. |Setting a higher value for this parameter can improve security by ensuring that passwords aren't easily guessed. |
+|rfc/reject_expired_passwd |Prevents the execution of RFCs when the user's password has expired. |Enabling this parameter can be helpful when enforcing password policies and preventing unauthorized access to SAP systems. When this parameter is set to `1`, RFC connections are rejected if the user's password has expired, and the user is prompted to change their password before they can connect. This helps ensure that only authorized users with valid passwords can access the system. |
+|rsau/max_diskspace/per_file |Sets the maximum size of an audit file that SAP system auditing can create. Setting a lower value helps prevent excessive growth of audit files and thus helps ensure adequate disk space. |Setting an appropriate value helps manage the size of audit files and avoid storage issues. |
+|login/min_password_letters |Specifies the minimum number of letters that must be included in a user's password. Setting a higher value helps increase password strength and security. |Setting an appropriate value helps enforce password policies and improve password security. |
+|rsau/selection_slots |Sets the number of selection slots that can be used for audit files. Setting a higher value can help avoid overwriting older audit files. |Helps ensure that audit files are retained for a longer period of time, which can be useful in the event of a security breach. |
+|gw/sim_mode |This parameter sets the gateway's simulation mode. When enabled, the gateway only simulates communication with the target system, and no actual communication takes place. |Enabling this parameter can be useful for testing purposes and can help prevent any unintended changes to the target system. |
+|login/fails_to_user_lock |Sets the number of failed login attempts after which the user account gets locked. Setting a lower value helps prevent brute force attacks. |Helps prevent unauthorized access to the system and helps protect user accounts from being compromised. |
+|login/password_compliance_to_current_policy |Enforces the compliance of new passwords with the current password policy of the system. Its value should be set to `1` to enable this feature. |High. Enabling this parameter can help ensure that users comply with the current password policy when changing passwords, which reduces the risk of unauthorized access to SAP systems. When this parameter is set to `1`, users are prompted to comply with the current password policy when changing their passwords. |
+|rfc/ext_debugging |Enables the RFC debugging mode for external RFC calls. Its value should be set to `0` to disable this feature. | |
+|gw/monitor |Enables monitoring of gateway connections. Its value should be set to `1` to enable this feature. | |
+|login/create_sso2_ticket |Enables the creation of SSO2 tickets for users. Its value should be set to `1` to enable this feature. | |
+|login/failed_user_auto_unlock |Enables automatic unlocking of user accounts after a failed login attempt. Its value should be set to `1` to enable this feature. | |
+|login/min_password_uppercase |Sets the minimum number of uppercase letters required in new passwords. Its value should be set to a positive integer. | |
+|login/min_password_specials |Sets the minimum number of special characters required in new passwords. Its value should be set to a positive integer. | |
+|snc/extid_login_rfc |Enables the use of SNC for external RFC calls. Its value should be set to `1` to enable this feature. | |
+|login/min_password_lowercase |Sets the minimum number of lowercase letters required in new passwords. Its value should be set to a positive integer. | |
+|login/password_downwards_compatibility |Allows passwords to be set using old hashing algorithms for backwards compatibility with older systems. Its value should be set to `0` to disable this feature. | |
+|snc/data_protection/min |Sets the minimum level of data protection that must be used for SNC-protected connections. Its value should be set to a positive integer. |Setting an appropriate value for this parameter helps ensure that SNC-protected connections provide a minimum level of data protection. This setting helps prevent sensitive information from being intercepted or manipulated by attackers. The value of this parameter should be set based on the security requirements of the SAP system and the sensitivity of the data transmitted over SNC-protected connections. |
+
+## Next steps
+
+For more information, see:
+
+- [Deploying Microsoft Sentinel solution for SAP® applications](deployment-overview.md)
+- [SAP solution security content](sap-solution-security-content.md)
+- [Microsoft Sentinel solution for SAP® applications logs reference](sap-solution-log-reference.md)
+- [Deploy the Microsoft Sentinel solution for SAP® applications data connector with SNC](configure-snc.md)
+- [Configuration file reference](configuration-file-reference.md)
+- [Prerequisites for deploying the Microsoft Sentinel solution for SAP® applications](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Troubleshooting your Microsoft Sentinel solution for SAP® applications deployment](sap-deploy-troubleshoot.md)
site-recovery Vmware Azure Install Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-install-mobility-service.md
On each Linux machine that you want to protect, do the following:
11. On the **Manage Accounts** tab, select **Add Account**. 12. Add the account you created. 13. Enter the credentials you use when you enable replication for a computer.
-1. Additional step for updating or protecting SUSE Linux Enterprise Server 11 SP3 OR RHEL 5 or CentOS 5 or Debian 7 machines. [Ensure the latest version is available in the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-oracle-linux-6-and-ubuntu-1404-server).
+1. Additional step for updating or protecting SUSE Linux Enterprise Server 11 SP3 OR RHEL 5 or CentOS 5 or Debian 7 machines. [Ensure the latest version is available in the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server).
## Anti-virus on replicated machines
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Disaster recovery of physical servers | Replication of on-premises Windows/Linux
**Server** | **Requirements** | **Details** | |
-vCenter Server | Version 8.0, Version 7.0 & subsequent updates in this version, 6.7, 6.5, 6.0, or 5.5 | We recommend that you use a vCenter server in your disaster recovery deployment.
-vSphere hosts | Version 7.0 & subsequent updates in this version, 6.7, 6.5, 6.0, or 5.5 | We recommend that vSphere hosts and vCenter servers are located in the same network as the process server. By default the process server runs on the configuration server. [Learn more](vmware-physical-azure-config-process-server-overview.md).
+vCenter Server | Version 8.0 & subsequent updates in this version, Version 7.0, 6.7, 6.5, 6.0, or 5.5 | We recommend that you use a vCenter server in your disaster recovery deployment.
+vSphere hosts | Version 8.0 & subsequent updates in this version, Version 7.0, 6.7, 6.5, 6.0, or 5.5 | We recommend that vSphere hosts and vCenter servers are located in the same network as the process server. By default the process server runs on the configuration server. [Learn more](vmware-physical-azure-config-process-server-overview.md).
## Site Recovery configuration server
Linux | Only 64-bit system is supported. 32-bit system isn't supported.<br/><br/
Linux Red Hat Enterprise | 5.2 to 5.11</b><br/> 6.1 to 6.10</b> </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9 Beta version](https://support.microsoft.com/help/4578241/), [7.9](https://support.microsoft.com/help/4590304/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), [8.4](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-305.30.1.el8_4.x86_64 or higher), [8.5](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-348.5.1.el8_5.x86_64 or higher), [8.6](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b), 8.7 <br/> Few older kernels on servers running Red Hat Enterprise Linux 5.2-5.11 & 6.1-6.10 do not have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure. Linux: CentOS | 5.2 to 5.11</b><br/> 6.1 to 6.10</b><br/> </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), [8.4](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-305.30.1.el8_4.x86_64 or later), [8.5](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-348.5.1.el8_5.x86_64 or later), 8.6, 8.7 <br/><br/> Few older kernels on servers running CentOS 5.2-5.11 & 6.1-6.10 do not have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure. Ubuntu | Ubuntu 14.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions)<br/>Ubuntu 16.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 18.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 20.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) <br> Ubuntu 22.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) <br> **Note**: Support for Ubuntu 22.04 is available for Modernized experience only and not available for Classic experience yet. </br> (*includes support for all 14.04.*x*, 16.04.*x*, 18.04.*x*, 20.04.*x* versions)
-Debian | Debian 7/Debian 8 (includes support for all 7. *x*, 8. *x* versions); Debian 9 (includes support for 9.1 to 9.13. Debian 9.0 is not supported.), Debian 10, Debian 11 [(Review supported kernel versions)](#debian-kernel-versions).
-SUSE Linux | SUSE Linux Enterprise Server 12 SP1, SP2, SP3, SP4, [SP5](https://support.microsoft.com/help/4570609) [(review supported kernel versions)](#suse-linux-enterprise-server-12-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 15, 15 SP1, SP2, SP3, SP4 [(review supported kernel versions)](#suse-linux-enterprise-server-15-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 11 SP3. [Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-oracle-linux-6-and-ubuntu-1404-server). </br> SUSE Linux Enterprise Server 11 SP4 </br> **Note**: Upgrading replicated machines from SUSE Linux Enterprise Server 11 SP3 to SP4 is not supported. To upgrade, disable replication and re-enable after the upgrade. <br/>|
+Debian | Debian 7/Debian 8 (includes support for all 7. *x*, 8. *x* versions). [Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). <br/> Debian 9 (includes support for 9.1 to 9.13. Debian 9.0 is not supported.). [Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). <br/> Debian 10, Debian 11 [(Review supported kernel versions)](#debian-kernel-versions).
+SUSE Linux | SUSE Linux Enterprise Server 12 SP1, SP2, SP3, SP4, [SP5](https://support.microsoft.com/help/4570609) [(review supported kernel versions)](#suse-linux-enterprise-server-12-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 15, 15 SP1, SP2, SP3, SP4 [(review supported kernel versions)](#suse-linux-enterprise-server-15-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 11 SP3. [Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). </br> SUSE Linux Enterprise Server 11 SP4 </br> **Note**: Upgrading replicated machines from SUSE Linux Enterprise Server 11 SP3 to SP4 is not supported. To upgrade, disable replication and re-enable after the upgrade. <br/>|
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4573888/), [7.9](https://support.microsoft.com/help/4597409/), [8.0](https://support.microsoft.com/help/4573888/), [8.1](https://support.microsoft.com/help/4573888/), [8.2](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.3](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.4](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), 8.5, 8.6 <br/><br/> Running the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3, 4 & 5 (UEK3, UEK4, UEK5)<br/><br/>8.1<br/>Running on all UEK kernels and RedHat kernel <= 3.10.0-1062.* are supported in [9.35](https://support.microsoft.com/help/4573888/). Support for the rest of the RedHat kernels is available in [9.36](https://support.microsoft.com/help/4578241/).

> [!NOTE]
site-recovery Vmware Physical Manage Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-manage-mobility-service.md
You set up mobility agent on your server when you use Azure Site Recovery for di
## Update mobility service from Azure portal

1. Before you start, ensure that the configuration server, scale-out process servers, and any master target servers that are a part of your deployment are updated before you update the Mobility Service on protected machines.
- 1. From 9.36 version onwards, for SUSE Linux Enterprise Server 11 SP3, RHEL 5, CentOS 5, Debian 7 ensure the latest installer is [available on the configuration server and scale-out process server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-oracle-linux-6-and-ubuntu-1404-server).
+ 1. From 9.36 version onwards, for SUSE Linux Enterprise Server 11 SP3, RHEL 5, CentOS 5, and Debian 7, ensure that the latest installer is [available on the configuration server and scale-out process server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server).
1. In the portal, open the vault > **Replicated items**.
1. If the configuration server is the latest version, you see a notification that reads "New Site recovery replication agent update is available. Click to install."
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
Push installation is an integral part of the job that's run from the Azure porta
- Ensure that all push installation [prerequisites](vmware-azure-install-mobility-service.md) are met.
- Ensure that all server configurations meet the criteria in the [Support matrix for disaster recovery of VMware VMs and physical servers to Azure](vmware-physical-azure-support-matrix.md).
-- From 9.36 version onwards, ensure the latest installer for SUSE Linux Enterprise Server 11 SP3, SUSE Linux Enterprise Server 11 SP4, RHEL 5, CentOS 5, Debian 7, Debian 8, Ubunut 14.04 is [available on the configuration server and scale-out process server](#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-oracle-linux-6-and-ubuntu-1404-server).
+- From 9.36 version onwards, ensure the latest installer for SUSE Linux Enterprise Server 11 SP3, SUSE Linux Enterprise Server 11 SP4, RHEL 5, CentOS 5, Debian 7, Debian 8, Ubuntu 14.04 is [available on the configuration server and scale-out process server](#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server).
+- From 9.52 version onwards, ensure the latest installer for Debian 9 is [available on the configuration server and scale-out process server](#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server).
The push installation workflow is described in the following sections:
Installer file | Operating system (64-bit only)
`Microsoft-ASR_UA_version_RHEL8-64_GA_date_release.tar.gz` | Red Hat Enterprise Linux (RHEL) 8 </br> CentOS 8
`Microsoft-ASR_UA_version_SLES12-64_GA_date_release.tar.gz` | SUSE Linux Enterprise Server 12 SP1 </br> Includes SP2 and SP3.
[To be downloaded and placed in this folder manually](#suse-11-sp3-or-suse-11-sp4-server) | SUSE Linux Enterprise Server 11 SP3
-`Microsoft-ASR_UA_version_SLES11-SP4-64_GA_date_release.tar.gz` | SUSE Linux Enterprise Server 11 SP4
+[To be downloaded and placed in this folder manually](#suse-11-sp3-or-suse-11-sp4-server) | SUSE Linux Enterprise Server 11 SP4
`Microsoft-ASR_UA_version_SLES15-64_GA_date_release.tar.gz` | SUSE Linux Enterprise Server 15
`Microsoft-ASR_UA_version_OL6-64_GA_date_release.tar.gz` | Oracle Enterprise Linux 6.4 </br> Oracle Enterprise Linux 6.5
`Microsoft-ASR_UA_version_OL7-64_GA_date_release.tar.gz` | Oracle Enterprise Linux 7
`Microsoft-ASR_UA_version_OL8-64_GA_date_release.tar.gz` | Oracle Enterprise Linux 8
-`Microsoft-ASR_UA_version_UBUNTU-14.04-64_GA_date_release.tar.gz` | Ubuntu Linux 14.04
+[To be downloaded and placed in this folder manually](#ubuntu-1404-server) | Ubuntu Linux 14.04
`Microsoft-ASR_UA_version_UBUNTU-16.04-64_GA_date_release.tar.gz` | Ubuntu Linux 16.04 LTS server
`Microsoft-ASR_UA_version_UBUNTU-18.04-64_GA_date_release.tar.gz` | Ubuntu Linux 18.04 LTS server
`Microsoft-ASR_UA_version_UBUNTU-20.04-64_GA_date_release.tar.gz` | Ubuntu Linux 20.04 LTS server
-[To be downloaded and placed in this folder manually](#debian-7-or-debian-8-server) | Debian 7
-`Microsoft-ASR_UA_version_DEBIAN8-64_GA_date_release.tar.gz` | Debian 8
-`Microsoft-ASR_UA_version_DEBIAN9-64_GA_date_release.tar.gz` | Debian 9
+[To be downloaded and placed in this folder manually](#debian-7-debian-8-or-debian-9-server) | Debian 7
+[To be downloaded and placed in this folder manually](#debian-7-debian-8-or-debian-9-server) | Debian 8
+[To be downloaded and placed in this folder manually](#debian-7-debian-8-or-debian-9-server) | Debian 9
-## Download latest mobility agent installer for SUSE 11 SP3, SUSE 11 SP4, RHEL 5, Cent OS 5, Debian 7, Debian 8, Oracle Linux 6 and Ubuntu 14.04 server
+## Download latest mobility agent installer for SUSE 11 SP3, SUSE 11 SP4, RHEL 5, Cent OS 5, Debian 7, Debian 8, Debian 9, Oracle Linux 6 and Ubuntu 14.04 server
### SUSE 11 SP3 or SUSE 11 SP4 server
As a **prerequisite to update or protect RHEL 5 machines** from 9.36 version onw
1. **For example**, if the install path is C:\Program Files (x86)\Microsoft Azure Site Recovery, then the above-mentioned directories will be
   1. C:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems\pushinstallsvc\repository
-## Debian 7 or Debian 8 server
+## Debian 7, Debian 8 or Debian 9 server
-As a **prerequisite to update or protect Debian 7 or Debian 8 machines** from 9.36 version onwards:
+As a **prerequisite to update or protect Debian 7, Debian 8 or Debian 9 machines**:
1. Ensure that the latest mobility agent installer is downloaded from the Microsoft Download Center and placed in the push installer repository on the configuration server and all scale-out process servers
-2. [Download](site-recovery-whats-new.md) the latest Debian 7 or Debian 8 agent installer.
-3. Navigate to Configuration server, copy the Debian 7 or Debian 8 agent installer on the path - INSTALL_DIR\home\svsystems\pushinstallsvc\repository
+2. [Download](site-recovery-whats-new.md) the latest Debian 7, Debian 8 or Debian 9 agent installer.
+3. Navigate to the configuration server, and copy the Debian 7, Debian 8 or Debian 9 agent installer to the path - INSTALL_DIR\home\svsystems\pushinstallsvc\repository
1. After copying the latest installer, restart the InMage PushInstall service.
1. Now, navigate to the associated scale-out process servers and repeat step 3 and step 4.
1. **For example**, if the install path is C:\Program Files (x86)\Microsoft Azure Site Recovery, then the above-mentioned directories will be
storage Data Lake Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-introduction.md
Previously updated : 03/09/2023 Last updated : 03/29/2023
Azure Data Lake Storage Gen2 is a set of capabilities dedicated to big data anal
Data Lake Storage Gen2 converges the capabilities of [Azure Data Lake Storage Gen1](../../data-lake-store/index.yml) with Azure Blob Storage. For example, Data Lake Storage Gen2 provides file system semantics, file-level security, and scale. Because these capabilities are built on Blob storage, you also get low-cost, tiered storage, with high availability/disaster recovery capabilities.
-## Designed for enterprise big data analytics
- Data Lake Storage Gen2 makes Azure Storage the foundation for building enterprise data lakes on Azure. Designed from the start to service multiple petabytes of information while sustaining hundreds of gigabits of throughput, Data Lake Storage Gen2 allows you to easily manage massive amounts of data.
-A fundamental part of Data Lake Storage Gen2 is the addition of a [hierarchical namespace](data-lake-storage-namespace.md) to Blob storage. The hierarchical namespace organizes objects/files into a hierarchy of directories for efficient data access. A common object store naming convention uses slashes in the name to mimic a hierarchical directory structure. This structure becomes real with Data Lake Storage Gen2. Operations such as renaming or deleting a directory, become single atomic metadata operations on the directory. There's no need to enumerate and process all objects that share the name prefix of the directory.
+## What is a Data Lake?
+
+A _data lake_ is a single, centralized repository where you can store all your data, both structured and unstructured. A data lake enables your organization to quickly and more easily store, access, and analyze a wide variety of data in a single location. With a data lake, you don't need to conform your data to fit an existing structure. Instead, you can store your data in its raw or native format, usually as files or as binary large objects (blobs).
+
+_Azure Data Lake Storage_ is a cloud-based, enterprise data lake solution. It's engineered to store massive amounts of data in any format, and to facilitate big data analytical workloads. You use it to capture data of any type and ingestion speed in a single location for easy access and analysis using various frameworks.
+
+## Data Lake Storage Gen2
+
+_Azure Data Lake Storage Gen2_ refers to the current implementation of Azure's Data Lake Storage solution. The previous implementation, _Azure Data Lake Storage Gen1_, will be retired on February 29, 2024.
+
+Unlike Data Lake Storage Gen1, Data Lake Storage Gen2 isn't a dedicated service or account type. Instead, it's implemented as a set of capabilities that you use with the Blob Storage service of your Azure Storage account. You can unlock these capabilities by enabling the hierarchical namespace setting.
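For illustration, a minimal Azure CLI sketch of enabling that setting at account creation time is shown below. The account, resource group, and location values are placeholders, and the `--enable-hierarchical-namespace` parameter is assumed to be available in your Azure CLI version.

```azurecli
# Create a general-purpose v2 account with the hierarchical namespace
# enabled, which unlocks Data Lake Storage Gen2 capabilities.
# Replace the placeholder values with your own.
az storage account create \
    --name <storage-account> \
    --resource-group <resource-group> \
    --location <location> \
    --sku Standard_LRS \
    --kind StorageV2 \
    --enable-hierarchical-namespace true
```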
+
+Data Lake Storage Gen2 includes the following capabilities.
-Data Lake Storage Gen2 builds on Blob storage and enhances performance, management, and security in the following ways:
+&#x2713;&nbsp;&nbsp; Hadoop-compatible access
-- **Performance** is optimized because you don't need to copy or transform data as a prerequisite for analysis. Compared to the flat namespace on Blob storage, the hierarchical namespace greatly improves the performance of directory management operations, which improves overall job performance.
+&#x2713;&nbsp;&nbsp; Hierarchical directory structure
-- **Management** is easier because you can organize and manipulate files through directories and subdirectories.
+&#x2713;&nbsp;&nbsp; Optimized cost and performance
-- **Security** is enforceable because you can define POSIX permissions on directories or individual files.
+&#x2713;&nbsp;&nbsp; Finer grain security model
-Also, Data Lake Storage Gen2 is very cost effective because it's built on top of the low-cost [Azure Blob Storage](storage-blobs-introduction.md). The extra features further lower the total cost of ownership for running big data analytics on Azure.
+&#x2713;&nbsp;&nbsp; Massive scalability
-## Key features of Data Lake Storage Gen2
+#### Hadoop-compatible access
-- **Hadoop compatible access:** Data Lake Storage Gen2 allows you to manage and access data just as you would with a [Hadoop Distributed File System (HDFS)](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html). The [ABFS driver](data-lake-storage-abfs-driver.md) (used to access data) is available within all Apache Hadoop environments. These environments include [Azure HDInsight](../../hdinsight/index.yml)*,* [Azure Databricks](/azure/databricks/), and [Azure Synapse Analytics](../../synapse-analytics/index.yml).
+Azure Data Lake Storage Gen2 is primarily designed to work with Hadoop and all frameworks that use the Apache [Hadoop Distributed File System (HDFS)](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) as their data access layer. Hadoop distributions include the [Azure Blob File System (ABFS)](data-lake-storage-abfs-driver.md) driver, which enables many applications and frameworks to access Azure Blob Storage data directly. The ABFS driver is [optimized specifically](data-lake-storage-abfs-driver.md) for big data analytics. The corresponding REST APIs are surfaced through the endpoint `dfs.core.windows.net`.
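As an illustration only, the following hedged Azure CLI sketch lists the contents of a directory through the Data Lake Storage Gen2 endpoint; the account, file system, and directory names are placeholders. Frameworks that use the ABFS driver typically address the same data with a URI of the form `abfss://<file-system>@<account>.dfs.core.windows.net/<path>`.

```azurecli
# List files and subdirectories under a directory in an account that
# has the hierarchical namespace enabled (placeholder names).
az storage fs file list \
    --account-name <storage-account> \
    --file-system <file-system> \
    --path <directory> \
    --auth-mode login \
    --output table
```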
-- **A superset of POSIX permissions:** The security model for Data Lake Gen2 supports ACL and POSIX permissions along with some extra granularity specific to Data Lake Storage Gen2. Settings can be configured by using Storage Explorer, the Azure portal, PowerShell, Azure CLI, REST APIs, Azure Storage SDKs, or by using frameworks like Hive and Spark.
+Data analysis frameworks that use HDFS as their data access layer can directly access Azure Data Lake Storage Gen2 data through ABFS. The Apache Spark analytics engine and the Presto SQL query engine are examples of such frameworks.
-- **Cost-effective:** Data Lake Storage Gen2 offers low-cost storage capacity and transactions. Features such as [Azure Blob Storage lifecycle](./lifecycle-management-overview.md) optimize costs as data transitions through its lifecycle.
+For more information about supported services and platforms, see [Azure services that support Azure Data Lake Storage Gen2](data-lake-storage-supported-azure-services.md) and [Open source platforms that support Azure Data Lake Storage Gen2](data-lake-storage-supported-open-source-platforms.md).
-- **Optimized driver:** The ABFS driver is [optimized specifically](data-lake-storage-abfs-driver.md) for big data analytics. The corresponding REST APIs are surfaced through the endpoint `dfs.core.windows.net`.
+#### Hierarchical directory structure
-### Scalability
+The [hierarchical namespace](data-lake-storage-namespace.md) is a key feature that enables Azure Data Lake Storage Gen2 to provide high-performance data access at object storage scale and price. You can use this feature to organize all the objects and files within your storage account into a hierarchy of directories and nested subdirectories. In other words, your Azure Data Lake Storage Gen2 data is organized in much the same way that files are organized on your computer.
-Azure Storage is scalable by design whether you access via Data Lake Storage Gen2 or Blob storage interfaces. It's able to store and serve *many exabytes of data*. This amount of storage is available with throughput measured in gigabits per second (Gbps) at high levels of input/output operations per second (IOPS). Processing is executed at near-constant per-request latencies that are measured at the service, account, and file levels.
+Operations such as renaming or deleting a directory become single atomic metadata operations on the directory. There's no need to enumerate and process all objects that share the name prefix of the directory.
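As a minimal sketch of this behavior, the following Azure CLI command renames a directory in a single operation; the account, file system, and directory names are placeholders.

```azurecli
# Rename (move) a directory as one atomic metadata operation.
# The --new-directory value takes the form <file-system>/<new-path>.
az storage fs directory move \
    --account-name <storage-account> \
    --file-system <file-system> \
    --name <directory> \
    --new-directory <file-system>/<new-directory> \
    --auth-mode login
```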
-### Cost effectiveness
+#### Optimized cost and performance
-Because Data Lake Storage Gen2 is built on top of Azure Blob Storage, storage capacity and transaction costs are lower. Unlike other cloud storage services, you don't have to move or transform your data before you can analyze it. For more information about pricing, see [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage).
+Azure Data Lake Storage Gen2 is priced at Azure Blob Storage levels. It builds on Azure Blob Storage capabilities such as automated lifecycle policy management and object level tiering to manage big data storage costs.
-Additionally, features such as the [hierarchical namespace](data-lake-storage-namespace.md) significantly improve the overall performance of many analytics jobs. This improvement in performance means that you require less compute power to process the same amount of data, resulting in a lower total cost of ownership (TCO) for the end-to-end analytics job.
+Performance is optimized because you don't need to copy or transform data as a prerequisite for analysis. The hierarchical namespace capability of Azure Data Lake Storage allows for efficient access and navigation. This architecture means that data processing requires fewer computational resources, reducing both the time and cost of accessing data.
-### One service, multiple concepts
+#### Finer grain security model
-Because Data Lake Storage Gen2 is built on top of Azure Blob Storage, multiple concepts can describe the same, shared things.
+The Azure Data Lake Storage Gen2 access control model supports both Azure role-based access control (Azure RBAC) and Portable Operating System Interface for UNIX (POSIX) access control lists (ACLs). There are also a few extra security settings that are specific to Azure Data Lake Storage Gen2. You can set permissions either at the directory level or at the file level. All stored data is encrypted at rest by using either Microsoft-managed or customer-managed encryption keys.
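For example, one possible way to set a POSIX-style ACL on a directory from the Azure CLI is sketched below; the account, file system, and directory names are placeholders, and the same permissions can also be configured from the Azure portal, PowerShell, REST APIs, or the SDKs.

```azurecli
# Grant the owning user full access, the owning group read and execute,
# and everyone else no access on a directory (placeholder names).
az storage fs access set \
    --account-name <storage-account> \
    --file-system <file-system> \
    --path <directory> \
    --acl "user::rwx,group::r-x,other::---" \
    --auth-mode login
```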
-The following are the equivalent entities, as described by different concepts. Unless specified otherwise these entities are directly synonymous:
+#### Massive scalability
-| Concept | Top Level Organization | Lower Level Organization | Data Container |
-|-|||-|
-| Blobs - General purpose object storage | Container | Virtual directory (SDK only - doesn't provide atomic manipulation) | Blob |
-| Azure Data Lake Storage Gen2 - Analytics Storage | Container | Directory | File |
+Azure Data Lake Storage Gen2 offers massive storage and accepts numerous data types for analytics. It doesn't impose any limits on account sizes, file sizes, or the amount of data that can be stored in the data lake. Individual files can have sizes that range from a few kilobytes (KBs) to a few petabytes (PBs). Processing is executed at near-constant per-request latencies that are measured at the service, account, and file levels.
-## Supported Blob Storage features
+This design means that Azure Data Lake Storage Gen2 can easily and quickly scale up to meet the most demanding workloads. It can also just as easily scale back down when demand drops.
-Blob Storage features such as [diagnostic logging](../common/storage-analytics-logging.md), [access tiers](access-tiers-overview.md), and [Blob Storage lifecycle management policies](./lifecycle-management-overview.md) are available to your account. Most Blob Storage features are fully supported, but some features are supported only at the preview level or not yet supported.
+## Built on Azure Blob Storage
-To see how each Blob Storage feature is supported with Data Lake Storage Gen2, see [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md).
+The data that you ingest persists as blobs in the storage account. The service that manages blobs is the Azure Blob Storage service. Data Lake Storage Gen2 describes the capabilities or "enhancements" to this service that cater to the demands of big data analytic workloads.
-## Supported Azure service integrations
+Because these capabilities are built on Blob Storage, features such as diagnostic logging, access tiers, and lifecycle management policies are available to your account. Most Blob Storage features are fully supported, but some features might be supported only at the preview level and there are a handful of them that aren't yet supported. For a complete list of support statements, see [Blob Storage feature support in Azure Storage accounts](storage-feature-support-in-storage-accounts.md). The status of each listed feature will change over time as support continues to expand.
-Data Lake Storage gen2 supports several Azure services. You can use them to ingest data, perform analytics, and create visual representations. For a list of supported Azure services, see [Azure services that support Azure Data Lake Storage Gen2](data-lake-storage-supported-azure-services.md).
+## Documentation and terminology
-## Supported open source platforms
+The Azure Blob Storage table of contents features two sections of content. The **Data Lake Storage Gen2** section of content provides best practices and guidance for using Data Lake Storage Gen2 capabilities. The **Blob Storage** section of content provides guidance for account features not specific to Data Lake Storage Gen2.
-Several open source platforms support Data Lake Storage Gen2. For a complete list, see [Open source platforms that support Azure Data Lake Storage Gen2](data-lake-storage-supported-open-source-platforms.md).
+As you move between sections, you might notice some slight terminology differences. For example, content featured in the Blob Storage documentation will use the term _blob_ instead of _file_. Technically, the files that you ingest to your storage account become blobs in your account. Therefore, the term is correct. However, the term _blob_ can cause confusion if you're used to the term _file_. You'll also see the term _container_ used to refer to a _file system_. Consider these terms as synonymous.
## See also
storage Classic Account Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migrate.md
+
+ Title: How to migrate your classic storage accounts to Azure Resource Manager
+
+description: Learn how to migrate your classic storage accounts to the Azure Resource Manager deployment model. All classic accounts must be migrated by August 31, 2024.
+++++ Last updated : 03/27/2023++++
+# How to migrate your classic storage accounts to Azure Resource Manager
+
+Microsoft will retire classic storage accounts on August 31, 2024. To preserve the data in any classic storage accounts, you must migrate them to the Azure Resource Manager deployment model by that date. After you migrate your account, all of the benefits of the Azure Resource Manager deployment model will be available for that account. For more information about the deployment models, see [Resource Manager and classic deployment](../../azure-resource-manager/management/deployment-models.md).
+
+This article describes how to migrate your classic storage accounts to the Azure Resource Manager deployment model. For more information, see [Migrate your classic storage accounts to Azure Resource Manager by August 31, 2024](classic-account-migration-overview.md).
+
+## Identify classic storage accounts in your subscription
+
+# [Portal](#tab/azure-portal)
+
+To list classic storage accounts in your subscription with the Azure portal:
+
+1. Navigate to the list of your storage accounts in the Azure portal.
+1. Select **Add filter**. In the **Filter** dialog, set the **Filter** field to **Type** and the **Operator** field to **Equals**. Then set the **Value** field to **microsoft.classicstorage/storageaccounts**.
+
+ :::image type="content" source="media/classic-account-migrate/classic-accounts-list-portal.png" alt-text="Screenshot showing how to list classic storage accounts in Azure portal." lightbox="media/classic-account-migrate/classic-accounts-list-portal.png":::
+
+# [PowerShell](#tab/azure-powershell)
+
+To list classic storage accounts in your subscription with PowerShell, run the following command:
+
+```azurepowershell
+Get-AzResource -ResourceType Microsoft.ClassicStorage/storageAccounts
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+To list classic storage accounts in your subscription with Azure CLI, run the following command:
+
+```azurecli
+az resource list \
+ --resource-type Microsoft.ClassicStorage/storageAccounts \
+ --query "[].{resource_type:type, name:name}" \
+ --output table
+```
+++
+## Migrate a classic storage account
+
+To migrate a classic storage account to the Azure Resource Manager deployment model with the Azure portal:
+
+1. Navigate to your classic storage account in the Azure portal.
+1. In the **Settings** section, click **Migrate to ARM**.
+1. Click on **Validate** to determine migration feasibility.
+
+ :::image type="content" source="./media/classic-account-migrate/validate-storage-account.png" alt-text="Screenshot showing how to migrate your classic storage account to Azure Resource Manager." lightbox="./media/classic-account-migrate/validate-storage-account.png":::
+
+1. After a successful validation, click on the **Prepare** button to simulate the migration.
+
+ > [!IMPORTANT]
+ > There may be a delay of a few minutes after validation is complete before the Prepare button is enabled.
+
+1. If the Prepare step completes successfully, you'll see a link to the new resource group. Select that link to navigate to the new resource group. The migrated storage account appears under the **Resources** tab in the **Overview** page for the new resource group.
+
+ At this point you can compare the configuration and data in the classic storage account to the newly migrated storage account. You'll see both in the list of storage accounts in the portal. Both the classic account and the migrated account have the same name.
+
+ :::image type="content" source="media/classic-account-migrate/compare-classic-migrated-accounts.png" alt-text="Screenshot showing the results of the Prepare step in the Azure portal." lightbox="media/classic-account-migrate/compare-classic-migrated-accounts.png":::
+
+1. If you're not satisfied with the results of the migration, select **Abort** to delete the new storage account and resource group. You can then address any problems and try again.
+1. When you're ready to commit, type **yes** to confirm, then select **Commit** to complete the migration.
+
+### Locate and delete disk artifacts in a classic account
+
+Classic storage accounts may contain classic (unmanaged) disks, virtual machine images, and operating system (OS) images. To migrate the account, you may need to delete these artifacts first.
+
+To delete disk artifacts from the Azure portal, follow these steps:
+
+1. Navigate to the Azure portal.
+1. In the **Search** bar at the top, search for **Disks (classic)**, **OS Images (classic)**, or **VM Images (classic)** to display classic disk artifacts.
+1. Locate the classic disk artifact to delete, and select it to view its properties.
+1. Select the **Delete** button to delete the disk artifact.
+
+ :::image type="content" source="media/classic-account-migrate/delete-disk-artifacts-portal.png" alt-text="Screenshot showing how to delete classic disk artifacts in Azure portal." lightbox="media/classic-account-migrate/delete-disk-artifacts-portal.png":::
+
+## See also
+
+- [Migrate your classic storage accounts to Azure Resource Manager by August 31, 2024](classic-account-migration-overview.md)
+- [Understand storage account migration from the classic deployment model to Azure Resource Manager](classic-account-migration-process.md)
storage Classic Account Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migration-overview.md
+
+ Title: We're retiring classic storage accounts on August 31, 2024
+
+description: Overview of migration of classic storage accounts to the Azure Resource Manager deployment model. All classic accounts must be migrated by August 31, 2024.
+++++ Last updated : 03/27/2023++++
+# Migrate your classic storage accounts to Azure Resource Manager by August 31, 2024
+
+The [Azure Resource Manager](../../azure-resource-manager/management/overview.md) deployment model now offers extensive functionality for Azure Storage accounts. For this reason, we deprecated the management of classic storage accounts through Azure Service Manager (ASM) on August 31, 2021. Classic storage accounts will be fully retired on August 31, 2024. All data in classic storage accounts must be migrated to Azure Resource Manager storage accounts by that date.
+
+If you have classic storage accounts, start planning your migration now. Complete it by August 31, 2024, to take advantage of Azure Resource Manager. To learn more about the benefits of Azure Resource Manager, see [The benefits of using Resource Manager](../../azure-resource-manager/management/overview.md#the-benefits-of-using-resource-manager).
+
+Storage accounts created using the classic deployment model will follow the [Modern Lifecycle Policy](https://support.microsoft.com/help/30881/modern-lifecycle-policy) for retirement.
+
+## How does this affect me?
+
+- Subscriptions created after August 31, 2022 can no longer create classic storage accounts.
+- Subscriptions created before September 1, 2022 will be able to create classic storage accounts until September 1, 2023.
+- On September 1, 2024, customers will no longer be able to connect to classic storage accounts by using Azure Service Manager. Any data still contained in these accounts will no longer be accessible through Azure Service Manager.
+
+> [!WARNING]
+> If you do not migrate your classic storage accounts to Azure Resource Manager by August 31, 2024, you will permanently lose access to the data in those accounts.
+
+## What resources are available for this migration?
+
+- If you have questions, get answers from community experts in [Microsoft Q&A](/answers/tags/98/azure-storage-accounts).
+- If your organization or company has partnered with Microsoft or works with Microsoft representatives, such as cloud solution architects (CSAs) or customer success account managers (CSAMs), contact them for additional resources for migration.
+- If you have a support plan and you need technical help, create a support request in the Azure portal:
+
+ 1. Search for **Help + support** in the [Azure portal](https://portal.azure.com#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
+ 1. Select **Create a support request**.
+ 1. Under **Summary**, type a description of your issue.
+ 1. Under **Issue type**, select **Technical**.
+ 1. Under **Subscription**, select your subscription.
+ 1. Under **Service**, select **My services**.
+ 1. Under **Service type**, select **Storage Account Management**.
+ 1. Under **Resource**, select the resource you want to migrate.
+ 1. Under **Problem type**, select **Data Migration**.
+ 1. Under **Problem subtype**, select **Migrate account to new resource group/subscription/region/tenant**.
+ 1. Select **Next**, then follow the instructions to submit your support request.
+
+## What actions should I take?
+
+To migrate your classic storage accounts, you should:
+
+1. Identify all classic storage accounts in your subscription.
+1. Migrate any classic storage accounts to Azure Resource Manager.
+1. Check your applications and logs to determine whether you are dynamically creating, updating, or deleting classic storage accounts from your code, scripts, or templates. If you are, then you need to update your applications to use Azure Resource Manager accounts instead.
+
+For step-by-step instructions, see [How to migrate your classic storage accounts to Azure Resource Manager](classic-account-migrate.md).
+
+## See also
+
+- [How to migrate your classic storage accounts to Azure Resource Manager](classic-account-migrate.md)
+- [Understand storage account migration from the classic deployment model to Azure Resource Manager](classic-account-migration-process.md)
storage Classic Account Migration Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migration-process.md
+
+ Title: Understand storage account migration from classic to Azure Resource Manager
+
+description: Learn about the process of migrating classic storage accounts to the Azure Resource Manager deployment model. All classic accounts must be migrated by August 31, 2024.
+++++ Last updated : 03/27/2023++++
+# Understand storage account migration from the classic deployment model to Azure Resource Manager
+
+Let's take an in-depth look at the process of migrating storage accounts from the Azure classic deployment model to the Azure Resource Manager deployment model. We look at resources at both the resource and feature level to help you understand how the Azure platform migrates resources between the two deployment models. For more information, see the service announcement article: [Migrate your classic storage accounts to Azure Resource Manager by August 31, 2024](classic-account-migration-overview.md).
+
+## Understand the data plane and management plane
+
+First, it's helpful to understand the basic architecture of Azure Storage. Azure Storage offers services that store data, including Blob Storage, Azure Data Lake Storage, Azure Files, Queue Storage, and Table Storage. These services and the operations they expose comprise the *data plane* for Azure Storage. Azure Storage also exposes operations for managing an Azure Storage account and related resources, such as redundancy SKUs, account keys, and certain policies. These operations comprise the *management* or *control* plane.
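To make the distinction concrete, the following Azure CLI sketch contrasts a management plane call with a data plane call; the account, resource group, and container names are placeholders.

```azurecli
# Management (control) plane: read the storage account resource itself.
az storage account show \
    --name <storage-account> \
    --resource-group <resource-group>

# Data plane: read data held in the account, in this case a blob listing.
az storage blob list \
    --account-name <storage-account> \
    --container-name <container> \
    --auth-mode login \
    --output table
```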
++
+During the migration process, Microsoft translates the representation of the storage account resource from the classic deployment model to the Azure Resource Manager deployment model. As a result, you need to use new tools, APIs, and SDKs to manage your storage accounts and related resources after the migration.
+
+The data plane is unaffected by migration from the classic deployment model to the Azure Resource Manager model. The data in your migrated storage account will be identical to the data in the original storage account.
+
+## The migration experience
+
+You can migrate your classic storage account with the Azure portal, PowerShell, or Azure CLI. To learn how to migrate your account, see [Migrate your classic storage accounts to Azure Resource Manager](classic-account-migrate.md).
+
+Before you start the migration:
+
+- Ensure that the storage accounts that you want to migrate don't use any unsupported features or configurations. Usually the platform detects these issues and generates an error.
+- Plan your migration during non-business hours to accommodate any unexpected failures that might happen during migration.
+- Evaluate any Azure role-based access control (Azure RBAC) roles that are configured on the classic storage account, and plan to review them after the migration is complete.
+- If possible, halt write operations to the storage account for the duration of the migration.
+
+There are four steps to the migration process, as shown in the following diagram:
++
+1. **Validate**. During the Validation phase, Azure checks the storage account to ensure that it can be migrated.
+1. **Prepare**. In the Prepare phase, Azure creates a new general-purpose v1 storage account and alerts you to any problems that may have occurred. The new account is created in a new resource group in the same region as your classic account. All of your data has been migrated to the new account.
+
+ At this point your classic storage account still exists and contains all of your data. If there are any problems reported, you can correct them or abort the process.
+
+1. **Check manually**. It's a good idea to make a manual check of the new storage account to make sure that the output is as you expect.
+1. **Commit or abort**. If you are satisfied that the migration has been successful, then you can commit the migration. Committing the migration permanently deletes the classic storage account.
+
+ If there are any problems with the migration, then you can abort the migration at this point. If you choose to abort, the new resource group and new storage account are deleted. Your classic account remains available. You can address any problems and attempt the migration again.
+
+> [!NOTE]
+> The operations described in the following sections are all idempotent. If you have a problem other than an unsupported feature or a configuration error, retry the prepare, abort, or commit operation. Azure tries the action again.
+
+### Validate
+
+The Validation step is the first step in the migration process. The goal of this step is to analyze the state of the resources that you want to migrate from the classic deployment model. The Validation step evaluates whether the resources are capable of migration (success or failure). If the classic storage account is not capable of migration, Azure lists the reasons why.
+
+The Validation step analyzes the state of resources in the classic deployment model. It checks for failures and unsupported scenarios due to different configurations of the storage account in the classic deployment model.
+
+The Validation step does not check for virtual machine (VM) disks that may be associated with the storage account. You must check your storage accounts manually to determine whether they support VM disks.
+
+Keep in mind that it's not possible to check for every constraint that the Azure Resource Manager stack might impose on the storage account during migration. Some constraints are only checked when the resources undergo transformation in the next step of migration (the Prepare step).
+
+### Prepare
+
+The Prepare step is the second step in the migration process. The goal of this step is to simulate the transformation of the storage account from the classic deployment model to the Azure Resource Manager deployment model. The Prepare step also enables you to compare the storage account in the classic deployment model to the migrated storage account in Azure Resource Manager.
+
+> [!IMPORTANT]
+> Your classic storage account is not modified during this step. It's a safe step to run if you're trying out migration.
+
+If the storage account is not capable of migration, Azure stops the migration process and lists the reason why the Prepare step failed.
+
+If the storage account is capable of migration, Azure blocks management plane operations for the storage account under migration. For example, you cannot regenerate the storage account keys while the Prepare phase is underway. Azure then creates a new resource group for the migrated storage account. The name of the new resource group follows the pattern `<classic-account-name>-Migrated`.
+
+> [!NOTE]
+> It is not possible to select the name of the resource group that is created for a migrated storage account. After migration is complete, however, you can use the move feature of Azure Resource Manager to move your migrated storage account to a different resource group. For more information, see [Move resources to a new subscription or resource group](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
+
+Finally, Azure migrates the storage account and all of its data and configurations to a new storage account in Azure Resource Manager in the same region as the classic storage account. At this point your classic storage account still exists and contains all of your data. If there are any problems reported during the Prepare step, you can correct them or abort the process.
+
+### Check manually
+
+After the Prepare step is complete, both accounts exist in your subscription, so that you can review and compare the classic storage account in the pre-migration state and in Azure Resource Manager. For example, you can examine the new account via the Azure portal to ensure that the storage account's configuration is as expected.
+
+There is no set window of time before which you need to commit or abort the migration. You can take as much time as you need for the Check phase. However, management plane operations are blocked for the classic storage account until you either abort or commit.
+
+### Abort
+
+To revert your changes to the classic deployment model, you can choose to abort the migration. Aborting the migration deletes the new storage account and new resource group. Your classic storage account is not affected if you choose to abort the migration.
+
+> [!CAUTION]
+> You cannot abort the migration after you have committed the migration. Make sure that you have checked your migrated storage account carefully for errors before you commit.
+
+### Commit
+
+After you are satisfied that your classic storage account has been migrated successfully, you can commit the migration. Committing the migration deletes your classic storage account. Your data is now available only in the newly migrated account in the Resource Manager deployment model.
+
+> [!NOTE]
+> Committing the migration is an idempotent operation. If it fails, retry the operation. If it continues to fail, create a support ticket or ask a question on [Microsoft Q&A](/answers/index.html).
+
+## After the migration
+
+After the migration is complete, your new storage account is ready for use. You can resume write operations at this point to the storage account.
+
+### Migrated account type
+
+After the migration is complete, your new storage account is a general-purpose v1 storage account. We recommend upgrading to a general-purpose v2 account to take advantage of the newest features that Azure Storage has to offer for security, data protection, lifecycle management, and more. To learn how to upgrade to a general-purpose v2 storage account, see [Upgrade to a general-purpose v2 storage account](storage-account-upgrade.md).
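As a hedged example, one possible Azure CLI upgrade command is shown below; the account and resource group names are placeholders, and you should confirm the default access tier you want before running it.

```azurecli
# Upgrade a general-purpose v1 account to general-purpose v2 and set
# the default access tier (placeholder names).
az storage account update \
    --name <storage-account> \
    --resource-group <resource-group> \
    --set kind=StorageV2 \
    --access-tier Hot
```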
+
+### Account properties
+
+Any properties that are set on the classic storage account are migrated with their values to the new storage account.
+
+### RBAC role assignments
+
+Any RBAC role assignments that are scoped to the classic storage account are maintained after the migration.
+
+### Account keys
+
+The account keys are not changed or rotated during the migration. You do not need to regenerate your account keys after the migration is complete. You will not need to update connection strings in any applications that are using the account keys after the migration.
+
+### Portal support
+
+You can manage your migrated storage accounts in the [Azure portal](https://portal.azure.com). You will not be able to use the classic portal to manage your migrated storage accounts.
+
+## See also
+
+- [Migrate your classic storage accounts to Azure Resource Manager by August 31, 2024](classic-account-migration-overview.md)
+- [How to migrate your classic storage accounts to Azure Resource Manager](classic-account-migrate.md)
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md
Alternately, you can delete the resource group, which deletes the storage accoun
- [Upgrade to a general-purpose v2 storage account](storage-account-upgrade.md)
- [Move a storage account to another region](storage-account-move.md)
- [Recover a deleted storage account](storage-account-recover.md)
-- [Migrate a classic storage account](storage-account-migrate-classic.md)
+- [Migrate a classic storage account](classic-account-migrate.md)
storage Storage Account Migrate Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-migrate-classic.md
- Title: Migrate a classic storage account-
-description: Learn how to migrate your classic storage accounts to the Azure Resource Manager deployment model. All classic accounts must be migrated by August 1, 2024.
----- Previously updated : 01/20/2023----
-# Migrate a classic storage account to Azure Resource Manager
-
-Microsoft will retire classic storage accounts on August 1, 2024. To preserve the data in any classic storage accounts, you must migrate them to the Azure Resource Manager deployment model by that date. After you migrate your account, all of the benefits of the Azure Resource Manager deployment model will be available for that account. For more information about the deployment models, see [Resource Manager and classic deployment](../../azure-resource-manager/management/deployment-models.md).
-
-This article describes how to migrate your classic storage accounts to the Azure Resource Manager deployment model.
-
-## Migrate a classic storage account
-
-# [Portal](#tab/azure-portal)
-
-To migrate a classic storage account to the Azure Resource Manager deployment model with the Azure portal:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to your classic storage account.
-1. In the **Settings** section, click **Migrate to ARM**.
-1. Click on **Validate** to determine migration feasibility.
-
- :::image type="content" source="./media/storage-account-migrate-classic/validate-storage-account.png" alt-text="Screenshot showing how to migrate your classic storage account to Azure Resource Manager.":::
-
-1. After a successful validation, click on **Prepare** to begin the migration.
-1. Type **yes** to confirm, then select **Commit** to complete the migration.
-
-# [PowerShell](#tab/azure-powershell)
-
-To migrate a classic storage account to the Azure Resource Manager deployment model with PowerShell, first validate that the account is ready for migration by running the following command. Remember to replace the placeholder values in brackets with your own values:
-
-```azurepowershell
-$storageAccountName = "<storage-account>"
-Move-AzureStorageAccount -Validate -StorageAccountName $storageAccountName
-```
-
-Next, prepare the account for migration:
-
-```azurepowershell
-Move-AzureStorageAccount -Prepare -StorageAccountName $storageAccountName
-```
-
-Check the configuration for the prepared storage account with either Azure PowerShell or the Azure portal. If you're not ready for migration, use the following command to revert your account to its previous state:
-
-```azurepowershell
-Move-AzureStorageAccount -Abort -StorageAccountName $storageAccountName
-```
-
-Finally, when you are satisfied with the prepared configuration, move forward with the migration and commit the resources with the following command:
-
-```azurepowershell
-Move-AzureStorageAccount -Commit -StorageAccountName $storageAccountName
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-To migrate a classic storage account to the Azure Resource Manager deployment model with the Azure CLI, first prepare the account for migration by running the following command. Remember to replace the placeholder values in brackets with your own values:
-
-```azurecli
-azure storage account prepare-migration <storage-account>
-```
-
-Check the configuration for the prepared storage account with either Azure CLI or the Azure portal. If you're not ready for migration, use the following command to revert your account to its previous state:
-
-```azurecli
-azure storage account abort-migration <storage-account>
-```
-
-Finally, when you are satisfied with the prepared configuration, move forward with the migration and commit the resources with the following command:
-
-```azurecli
-azure storage account commit-migration <storage-account>
-```
---
-## See also
--- [Create a storage account](storage-account-create.md)-- [Move an Azure Storage account to another region](storage-account-move.md)-- [Upgrade to a general-purpose v2 storage account](storage-account-upgrade.md)-- [Get storage account configuration information](storage-account-get-info.md)
storage Redundancy Premium File Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/redundancy-premium-file-shares.md
+
+ Title: Azure Files zone-redundant storage (ZRS) support for premium file shares
+description: ZRS is supported for premium Azure file shares through the FileStorage storage account kind. Use this reference to determine the Azure regions in which ZRS is supported.
++++ Last updated : 03/29/2023+++++
+# Azure Files zone-redundant storage for premium file shares
+
+Zone-redundant storage (ZRS) replicates your storage account synchronously across three Azure availability zones in the primary region.
+
+## Applies to
+| File share type | SMB | NFS |
+|-|:-:|:-:|
+| Standard file shares (GPv2), LRS/ZRS | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Standard file shares (GPv2), GRS/GZRS | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
+| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+
+## Premium file share accounts
+ZRS is supported for premium Azure file shares through the `FileStorage` storage account kind.
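For illustration, the following Azure CLI sketch creates such an account; the account name, resource group, and region are placeholders, and the region must be one where ZRS for premium file shares is available.

```azurecli
# Create a premium file share account that uses zone-redundant storage.
az storage account create \
    --name <storage-account> \
    --resource-group <resource-group> \
    --location <supported-region> \
    --kind FileStorage \
    --sku Premium_ZRS
```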
++
+## See also
+
+- [Azure Storage redundancy](../common/storage-redundancy.md)
storage Storage Files Monitoring Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-monitoring-reference.md
Title: Azure Files monitoring data reference description: Log and metrics reference for monitoring data from Azure Files.-+ Previously updated : 10/02/2020- Last updated : 03/29/2023+
The following tables list the platform metrics collected for Azure Files.
### Capacity metrics
-Capacity metrics values are refreshed daily (up to 24 Hours). The time grain defines the time interval for which metrics values are presented. The supported time grain for all capacity metrics is one hour (PT1H).
+Capacity metrics values are refreshed daily (up to 24 hours). The time grain defines the time interval for which metrics values are presented. The supported time grain for all capacity metrics is one hour (PT1H).
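For example, one hedged way to retrieve a capacity metric at the one-hour grain is with the Azure CLI; the subscription, resource group, and account values are placeholders, and `FileCapacity` is assumed to be the metric of interest.

```azurecli
# Retrieve the FileCapacity metric for the file service at the
# one-hour (PT1H) time grain (placeholder resource ID segments).
az monitor metrics list \
    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default" \
    --metric FileCapacity \
    --interval PT1H \
    --output table
```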
Azure Files provides the following capacity metrics in Azure Monitor.
storage Storage Files Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md
Azure file share scale targets apply at the file share level.
<sup>3</sup> Azure Files enforces certain [naming rules](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#directory-and-file-names) for directory and file names.

### File scale targets
-File scale targets apply to individual files stored in Azure file shares.
+File scale targets apply to individual files stored in Azure file shares. Soft limits and throttling can occur beyond these limits.
| Attribute | Files in standard file shares | Files in premium file shares |
|-|-|-|
synapse-analytics Maintenance Scheduling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/maintenance-scheduling.md
To view the maintenance schedule that has been applied to your Synapse SQL pool,
![Overview blade](./media/maintenance-scheduling/clear-overview-blade.PNG)
+## Skip or disable maintenance schedule
+
+To ensure compliance with the latest security requirements, we are unable to accommodate requests to skip or delay these updates. However, you may have some options to adjust your maintenance window within the current cycle depending on your situation:
+- If you receive a pending notification for maintenance, and you need more time to finish your jobs or notify your team, you can change the window start time as long as you do so before the beginning of your defined maintenance window. This will shift your window forward in time within the cycle. Note that if you change the window to a start time before the actual present time, maintenance will be triggered immediately.
+- You can manually trigger the maintenance by pausing and resuming (or scaling) your dedicated SQL pool after the start of a cycle for which a "Pending" notification has been received (see the example after this list). The weekend maintenance cycle starts on Saturday at 00:00 UTC; the midweek maintenance cycle starts Tuesday at 12:00 UTC.
+- Although we do require a minimum window of 3 hours, note that maintenance usually takes less than 30 minutes to complete, but it may take longer in some cases. For example, if there are active transactions when the maintenance starts, they will be aborted and rolled back, which may cause delays in coming back online. To avoid this scenario, we recommend that you ensure that no long-running transactions are active during the start of your maintenance window.
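The following Azure CLI sketch illustrates the pause-and-resume option mentioned in the list above for a dedicated SQL pool in a Synapse workspace; the pool, workspace, and resource group names are placeholders. Standalone dedicated SQL pools (formerly SQL DW) use the `az sql dw pause` and `az sql dw resume` commands instead.

```azurecli
# Pause and then resume a dedicated SQL pool to trigger pending
# maintenance (placeholder names).
az synapse sql pool pause \
    --name <sql-pool> \
    --workspace-name <workspace> \
    --resource-group <resource-group>

az synapse sql pool resume \
    --name <sql-pool> \
    --workspace-name <workspace> \
    --resource-group <resource-group>
```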
## Change a maintenance schedule

A maintenance schedule can be updated or changed at any time. If the selected instance is going through an active maintenance cycle, the settings will be saved. They'll become active during the next identified maintenance period. [Learn more](../../service-health/resource-health-overview.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) about monitoring your data warehouse during an active maintenance event.
traffic-manager Traffic Manager Endpoint Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-endpoint-types.md
Title: Traffic Manager Endpoint Types | Microsoft Docs
-description: This article explains different types of endpoints that can be used with Azure Traffic Manager
+ Title: Traffic Manager Endpoint Types
+description: Learn about the different types of endpoints that can be used with Azure Traffic Manager.
- -+ Previously updated : 01/21/2021 Last updated : 03/30/2023 + # Traffic Manager endpoints
-Microsoft Azure Traffic Manager allows you to control how network traffic is distributed to application deployments running in different datacenters. You configure each application deployment as an 'endpoint' in Traffic Manager. When Traffic Manager receives a DNS request, it chooses an available endpoint to return in the DNS response. Traffic manager bases the choice on the current endpoint status and the traffic-routing method. For more information, see [How Traffic Manager Works](traffic-manager-how-it-works.md).
+Azure Traffic Manager allows you to control how network traffic is distributed to application deployments running in different datacenters. You configure each application deployment as an 'endpoint' in Traffic Manager. When Traffic Manager receives a DNS request, it chooses an available endpoint to return in the DNS response. Traffic Manager bases the choice on the current endpoint status and the traffic-routing method. For more information, see [How Traffic Manager works](traffic-manager-how-it-works.md).
There are three types of endpoint supported by Traffic
-* **Azure endpoints** are used for services hosted in Azure.
-* **External endpoints** are used for IPv4/IPv6 addresses, FQDNs, or for services hosted outside Azure. Theses services can either be on-premises or with a different hosting provider.
-* **Nested endpoints** are used to combine Traffic Manager profiles to create more flexible traffic-routing schemes to support the needs of larger, more complex deployments.
+* [**Azure endpoints**](#azure-endpoints) are used for services hosted in Azure.
+* [**External endpoints**](#external-endpoints) are used for IPv4/IPv6 addresses, FQDNs, or for services hosted outside Azure. These services can either be on-premises or with a different hosting provider.
+* [**Nested endpoints**](#nested-endpoints) are used to combine Traffic Manager profiles to create more flexible traffic-routing schemes to support the needs of larger, more complex deployments.
There's no restriction on how endpoints of different types are combined in a single Traffic Manager profile. Each profile can contain any mix of endpoint types.
When using Azure endpoints, Traffic Manager detects when a Web App is stopped an
## External endpoints
-External endpoints are used for either IPv4/IPv6 addresses, FQDNs, or for services outside of Azure. Use of IPv4/IPv6 address endpoints allows traffic manager to check the health of endpoints without requiring a DNS name for them. As a result, Traffic Manager can respond to queries with A/AAAA records when returning that endpoint in a response. Services outside of Azure can include a service hosted on-premises or with a different provider. External endpoints can be used individually or combined with Azure Endpoints in the same Traffic Manager profile. The exception is for endpoints that are specified as IPv4 or IPv6 addresses, which can only be external endpoints. Combining Azure endpoints with External endpoints enables various scenarios:
+External endpoints are used for either IPv4/IPv6 addresses, FQDNs, or for services outside of Azure. Use of IPv4/IPv6 address endpoints allows Traffic Manager to check the health of endpoints without requiring a DNS name for them. As a result, Traffic Manager can respond to queries with A/AAAA records when returning that endpoint in a response. Services outside of Azure can include a service hosted on-premises or with a different provider. External endpoints can be used individually or combined with Azure Endpoints in the same Traffic Manager profile. The exception is for endpoints that are specified as IPv4 or IPv6 addresses, which can only be external endpoints. Combining Azure endpoints with External endpoints enables various scenarios:
* Provide increased redundancy for an existing on-premises application in either an active-active or active-passive failover model using Azure. * Route traffic to endpoints that don't have a DNS name associated with them. Also reduces the overall DNS lookup latency by removing the need to run a second DNS query to get an IP address of a DNS name returned.
-* Reduce application latency for users around the world, extend an existing on-premises application to other geographic locations in Azure. For more information, see [Traffic Manager 'Performance' traffic routing](traffic-manager-routing-methods.md#performance).
+* Reduce application latency for users around the world, extend an existing on-premises application to other geographic locations in Azure. For more information, see [Performance traffic routing](traffic-manager-routing-methods.md#performance).
* Provide more capacity for an existing on-premises application, either continuously or as a 'burst-to-cloud' solution to meet a spike in demand using Azure.
-In some cases, it's useful to use External endpoints to reference Azure services. See the [FAQ](traffic-manager-faqs.md#traffic-manager-endpoints) for examples. Health checks are billed at the Azure endpoints rate, not the External endpoints rate. Unlike Azure endpoints, if you stop or delete the underlying service the health check billing continues. The billing will stop once you disable or delete the endpoint in Traffic Manager.
+In some cases, it's useful to use External endpoints to reference Azure services. See the [FAQ](traffic-manager-faqs.md#traffic-manager-endpoints) for examples. Health checks are billed at the Azure endpoints rate, not the External endpoints rate. Unlike Azure endpoints, if you stop or delete the underlying service, the health check billing continues. The billing stops once you disable or delete the endpoint in Traffic Manager.
## Nested endpoints
-Nested endpoints combine multiple Traffic Manager profiles to create flexible traffic-routing schemes to support the needs of larger and complex deployments. With Nested endpoints, a 'child' profile is added as an endpoint to a 'parent' profile. Both the child and parent profiles can contain other endpoints of any type, including other nested profiles.
+Nested endpoints combine multiple Traffic Manager profiles to create flexible traffic-routing schemes to support the needs of larger and complex deployments. With Nested endpoints, a ***child*** profile is added as an endpoint to a ***parent*** profile. Both the child and parent profiles can contain other endpoints of any type, including other nested profiles.
-For more information, see [nested Traffic Manager profiles](traffic-manager-nested-profiles.md).
+For more information, see [Nested Traffic Manager profiles](traffic-manager-nested-profiles.md).
## Web Apps as endpoints Some more considerations apply when configuring Web Apps as endpoints in Traffic
-1. Only Web Apps at the 'Standard' SKU or above are eligible for use with Traffic Manager. Attempts to add a Web App of a lower SKU fail. Downgrading the SKU of an existing Web App results in Traffic Manager no longer sending traffic to that Web App. For more information on supported plans, see the [App Service Plans](https://azure.microsoft.com/pricing/details/app-service/plans/)
-2. When an endpoint receives an HTTP request, it uses the 'host' header in the request to determine which Web App should service the request. The host header contains the DNS name used to start the request, for example 'contosoapp.azurewebsites.net'. To use a different DNS name with your Web App, the DNS name must be registered as a custom domain name for the App. When adding a Web App endpoint as an Azure endpoint, the Traffic Manager profile DNS name is automatically registered for the App. This registration is automatically removed when the endpoint is deleted.
+1. Only Web Apps at the Standard SKU or higher are eligible for use with Traffic Manager. Attempts to add a Web App of a lower SKU fail. Downgrading the SKU of an existing Web App results in Traffic Manager no longer sending traffic to that Web App. For more information on supported plans, see the [App Service Plans](https://azure.microsoft.com/pricing/details/app-service/plans/).
+2. When an endpoint receives an HTTP request, it uses the *host* header in the request to determine which Web App should service the request. The host header contains the DNS name used to start the request, for example `contosoapp.azurewebsites.net`. To use a different DNS name with your Web App, the DNS name must be registered as a custom domain name for the App. When adding a Web App endpoint as an Azure endpoint, the Traffic Manager profile DNS name is automatically registered for the App. This registration is automatically removed when the endpoint is deleted.
3. Each Traffic Manager profile can have at most one Web App endpoint from each Azure region. To work around this constraint, you can configure a Web App as an External endpoint. For more information, see the [FAQ](traffic-manager-faqs.md#traffic-manager-endpoints). ## Enabling and disabling endpoints Disabling an endpoint in Traffic Manager can be useful to temporarily remove traffic from an endpoint that is in maintenance mode or being redeployed. Once the endpoint is running again, it can be re-enabled.
-You can enable or disable endpoints via the Traffic Manager portal, PowerShell, CLI, or REST API.
+You can enable or disable Traffic Manager endpoints using the Azure portal, PowerShell, CLI, or REST API.
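For example, the following Azure CLI sketch disables and then re-enables an Azure endpoint by updating its status; the resource group, profile, and endpoint names are placeholders.

```bash
# Disable an Azure endpoint in a Traffic Manager profile (placeholder names).
az network traffic-manager endpoint update \
    --resource-group contoso-rg \
    --profile-name contoso-profile \
    --name contoso-webapp \
    --type azureEndpoints \
    --endpoint-status Disabled

# Re-enable the endpoint once maintenance or redeployment is complete.
az network traffic-manager endpoint update \
    --resource-group contoso-rg \
    --profile-name contoso-profile \
    --name contoso-webapp \
    --type azureEndpoints \
    --endpoint-status Enabled
```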
> [!NOTE]
-> Disabling an Azure endpoint has nothing to do with its deployment state in Azure. An Azure service (such as a VM or Web App remains running and able to receive traffic even when disabled in Traffic Manager. Traffic can be addressed directly to the service instance rather than via the Traffic Manager profile DNS name. For more information, see [how Traffic Manager works](traffic-manager-how-it-works.md).
+> Disabling an Azure endpoint has nothing to do with its deployment state in Azure. An Azure service (such as a VM or Web App) remains running and able to receive traffic even when disabled in Traffic Manager. Traffic can be addressed directly to the service instance rather than via the Traffic Manager profile DNS name. For more information, see [How Traffic Manager works](traffic-manager-how-it-works.md).
The current eligibility of each endpoint to receive traffic depends on the following factors:
For details, see [Traffic Manager endpoint monitoring](traffic-manager-monitorin
> [!NOTE] > Since Traffic Manager works at the DNS level, it is unable to influence existing connections to any endpoint. When an endpoint is unavailable, Traffic Manager directs new connections to another available endpoint. However, the host behind the disabled or unhealthy endpoint may continue to receive traffic via existing connections until those sessions are terminated. Applications should limit the session duration to allow traffic to drain from existing connections.
-If all endpoints in a profile get disabled, or if the profile itself get disabled, then Traffic Manager sends an 'NXDOMAIN' response to a new DNS query.
+If all endpoints in a profile get disabled, or if the profile itself gets disabled, then Traffic Manager sends an `NXDOMAIN` response to a new DNS query.
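You can observe this behavior with a DNS lookup tool such as `dig`; the profile name here is a placeholder.

```bash
# When every endpoint in the profile is disabled (or the profile is disabled),
# the query returns status NXDOMAIN instead of an answer record.
dig contoso-profile.trafficmanager.net
```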
-## FAQs
+## FAQ
* [Can I use Traffic Manager with endpoints from multiple subscriptions?](./traffic-manager-faqs.md#can-i-use-traffic-manager-with-endpoints-from-multiple-subscriptions)
update-center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/overview.md
description: The article tells what update management center (preview) in Azure
Previously updated : 04/21/2022 Last updated : 03/23/2023
We also offer other capabilities to help you manage updates for your Azure Virtu
Before you enable your machines for update management center (preview), make sure that you understand the information in the following sections. > [!IMPORTANT]
-> Update management center (preview) can manage machines that are currently managed by Azure Automation [Update management](../automation/update-management/overview.md) feature without interrupting your update management process. However, we don't recommend migrating from Automation Update Management since this preview gives you a chance to evaluate and provide feedback on features before it's generally available (GA).
->
-> While update management center is in **preview**, the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> - Update management center (preview) doesn't store any customer data.
+> - Update management center (preview) can manage machines that are currently managed by Azure Automation [Update management](../automation/update-management/overview.md) feature without interrupting your update management process. However, we don't recommend migrating from Automation Update Management since this preview gives you a chance to evaluate and provide feedback on features before it's generally available (GA).
+> - While update management center is in **preview**, the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Key benefits
virtual-machines Concepts Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/concepts-restore-points.md
The following table summarizes the support matrix for VM restore points.
**Minimum Frequency at which [crash consistent restore points (preview)](https://github.com/Azure/Virtual-Machine-Restore-Points/tree/main/Crash%20consistent%20VM%20restore%20points%20(preview)) can be taken** | 1 hour > [!Note]- > Restore Points (App consistent or crash consistent) can be created by customer at the minimum supported frequency as mentioned above. Taking restore points at a frequency lower than supported would result in failure. ## Operating system support
virtual-machines Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-linux.md
Title: Azure Linux VM Agent Overview
-description: Learn how to install and configure Linux Agent (waagent) to manage your virtual machine's interaction with Azure Fabric Controller.
+ Title: Azure Linux VM Agent overview
+description: Learn how to install and configure Azure Linux Agent (waagent) to manage your virtual machine's interaction with the Azure Fabric controller.
Previously updated : 10/17/2016 Last updated : 03/28/2023 # Understanding and using the Azure Linux Agent
-The Microsoft Azure Linux Agent (waagent) manages Linux & FreeBSD provisioning, and VM interaction with the Azure Fabric Controller. In addition to the Linux Agent providing provisioning functionality, Azure also provides the option of using cloud-init for some Linux OSes. The Linux Agent provides the following functionality for Linux and FreeBSD IaaS deployments:
+The Microsoft Azure Linux Agent (waagent) manages Linux and FreeBSD provisioning, and virtual machine (VM) interaction with the Azure Fabric controller. In addition to the Linux agent providing provisioning functionality, Azure also provides the option of using `cloud-init` for some Linux operating systems.
-> [!NOTE]
-> For more information, see the [README](https://github.com/Azure/WALinuxAgent/blob/master/README.md).
->
->
-* **Image Provisioning**
-
- * Creation of a user account
- * Configuring SSH authentication types
- * Deployment of SSH public keys and key pairs
- * Setting the host name
- * Publishing the host name to the platform DNS
- * Reporting SSH host key fingerprint to the platform
- * Resource Disk Management
- * Formatting and mounting the resource disk
- * Configuring swap space
-* **Networking**
-
- * Manages routes to improve compatibility with platform DHCP servers
- * Ensures the stability of the network interface name
-* **Kernel**
-
- * Configures virtual NUMA (disable for kernel <`2.6.37`)
- * Consumes Hyper-V entropy for /dev/random
- * Configures SCSI timeouts for the root device (which could be remote)
-* **Diagnostics**
-
- * Console redirection to the serial port
-* **SCVMM Deployments**
-
- * Detects and bootstraps the VMM agent for Linux when running in a System Center Virtual Machine Manager 2012 R2 environment
-* **VM Extension**
-
- * Inject component authored by Microsoft and Partners into Linux VM (IaaS) to enable software and configuration automation
- * VM Extension reference implementation on [https://github.com/Azure/azure-linux-extensions](https://github.com/Azure/azure-linux-extensions)
+The Linux agent provides the following functionality for Linux and FreeBSD Azure Virtual Machines deployments. For more information, see [Microsoft Azure Linux Agent](https://github.com/Azure/WALinuxAgent/blob/master/README.md).
+
+### Image provisioning
+
+- Creates a user account
+- Configures SSH authentication types
+- Deploys SSH public keys and key pairs
+- Sets the host name
+- Publishes the host name to the platform DNS
+- Reports SSH host key fingerprint to the platform
+- Manages resource disk
+- Formats and mounts the resource disk
+- Configures swap space
+
+### Networking
+
+- Manages routes to improve compatibility with platform DHCP servers
+- Ensures the stability of the network interface name
+
+### Kernel
+
+- Configures virtual NUMA (disable for kernel <`2.6.37`)
+- Consumes Hyper-V entropy for */dev/random*
+- Configures SCSI timeouts for the root device, which can be remote
+
+### Diagnostics
+
+- Console redirection to the serial port
+
+### System Center Virtual Machine Manager deployments
+
+- Detects and bootstraps the Virtual Machine Manager agent for Linux when running in a System Center Virtual Machine Manager 2012 R2 environment
+
+### VM Extension
+
+- Injects component authored by Microsoft and partners into Linux VMs to enable software and configuration automation
+- VM Extension reference implementation on [https://github.com/Azure/azure-linux-extensions](https://github.com/Azure/azure-linux-extensions)
## Communication
-The information flow from the platform to the agent occurs via two channels:
+The information flow from the platform to the agent occurs by using two channels:
-* A boot-time attached DVD for IaaS deployments. This DVD includes an OVF-compliant configuration file that includes all provisioning information other than the actual SSH keypairs.
-* A TCP endpoint exposing a REST API used to obtain deployment and topology configuration.
+- A boot-time attached DVD for VM deployments. This DVD includes an Open Virtualization Format (OVF)-compliant configuration file that includes all provisioning information other than the SSH key pairs.
+- A TCP endpoint exposing a REST API that's used to obtain deployment and topology configuration (see the sketch after this list).
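As an illustration of the second channel (not something you normally need to call yourself), the following sketch queries the platform endpoint from inside a VM. The address is the well-known Azure host IP; the header value is the protocol version that waagent typically negotiates.

```bash
# Illustrative only: ask the platform endpoint which protocol versions it supports.
curl -s -H "x-ms-version: 2012-11-30" "http://168.63.129.16/?comp=versions"
```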
## Requirements The following systems have been tested and are known to work with the Azure Linux Agent: > [!NOTE]
-> This list may differ from the official list of [supported distros](../linux/endorsed-distros.md).
->
->
+> This list might differ from the [Endorsed Linux distributions on Azure](../linux/endorsed-distros.md).
-### Linux DistroΓÇÖs Supported
-| **Linux Distro** | **x64** | **ARM64** |
+| Distribution | x64 | ARM64 |
|:--|:--:|:--:|
-| Alma Linux | 9.x+ | 9.x+ |
-| CentOS | 7.x+, 8.x+ | 7.x+ |
-| Debian | 10+ | 11.x+ |
-| Flatcar Linux | 3374.2.x+ | 3374.2.x+ |
-| openSUSE | 12.3+ | <span style="color:red">Not Supported</span> |
-| Oracle Linux | 6.4+, 7.x+, 8.x+ | <span style="color:red">Not Supported</span> |
-| Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+ | 8.6+, 9.0+ |
-| Rocky Linux | 9.x+ | 9.x+ |
-| SLES | 12.x+, 15.x+ | 15.x SP4+ |
-| Ubuntu | 18.04+, 20.04+, 22.04+ | 20.04+, 22.04+ |
-
+| Alma Linux | 9.x+ | 9.x+ |
+| CentOS | 7.x+, 8.x+ | 7.x+ |
+| Debian | 10+ | 11.x+ |
+| Flatcar Linux | 3374.2.x+ | 3374.2.x+ |
+| openSUSE | 12.3+ | **Not Supported** |
+| Oracle Linux | 6.4+, 7.x+, 8.x+ | **Not Supported** |
+| Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+ | 8.6+, 9.0+ |
+| Rocky Linux | 9.x+ | 9.x+ |
+| SLES | 12.x+, 15.x+ | 15.x SP4+ |
+| Ubuntu | 18.04+, 20.04+, 22.04+ | 20.04+, 22.04+ |
> [!IMPORTANT]
-> RHEL/Oracle Linux 6.10 is the only RHEL/OL 6 version with ELS support available, [the extended maintenance ends on 06/30/2024](https://access.redhat.com/support/policy/updates/errata)
+> RHEL/Oracle Linux 6.10 is the only RHEL/OL 6 version with ELS support available. [The extended maintenance ends on June 30, 2024](https://access.redhat.com/support/policy/updates/errata).
Other Supported Systems:
-* FreeBSD 10+ (Azure Linux Agent v2.0.10+)
+- FreeBSD 10+ (Azure Linux Agent v2.0.10+)
The Linux agent depends on some system packages in order to function properly:
-* Python 2.6+
-* OpenSSL 1.0+
-* OpenSSH 5.3+
-* Filesystem utilities: sfdisk, fdisk, mkfs, parted
-* Password tools: chpasswd, sudo
-* Text processing tools: sed, grep
-* Network tools: ip-route
-* Kernel support for mounting UDF filesystems.
+- Python 2.6+
+- OpenSSL 1.0+
+- OpenSSH 5.3+
+- File system utilities: sfdisk, fdisk, mkfs, parted
+- Password tools: chpasswd, sudo
+- Text processing tools: sed, grep
+- Network tools: ip-route
+- Kernel support for mounting UDF file systems.
Ensure your VM has access to IP address 168.63.129.16. For more information, see [What is IP address 168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md). ## Installation
-Installation using an RPM or a DEB package from your distribution's package repository is the preferred method of installing and upgrading the Azure Linux Agent. All the [endorsed distribution providers](../linux/endorsed-distros.md) integrate the Azure Linux agent package into their images and repositories.
+The preferred method of installing and upgrading the Azure Linux Agent uses an RPM or a DEB package from your distribution's package repository. All the [endorsed distribution providers](../linux/endorsed-distros.md) integrate the Azure Linux agent package into their images and repositories.
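For example, on common distributions the package can typically be installed or updated as follows; exact package names can vary by distribution, so check your distribution's repository.

```bash
# Debian and Ubuntu (package is usually named 'walinuxagent')
sudo apt-get update
sudo apt-get install walinuxagent

# RHEL, CentOS, and similar (package is usually named 'WALinuxAgent')
sudo yum install WALinuxAgent
```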
-Refer to the documentation in the [Azure Linux Agent repo on GitHub](https://github.com/Azure/WALinuxAgent) for advanced installation options, such as installing from source or to custom locations or prefixes.
+For advanced installation options, such as installing from source or to custom locations or prefixes, see [Microsoft Azure Linux Agent](https://github.com/Azure/WALinuxAgent).
-## Command-Line Options
+## Command-line options
### Flags
-* verbose: Increase verbosity of specified command
-* force: Skip interactive confirmation for some commands
+- `verbose`: Increase verbosity of specified command
+- `force`: Skip interactive confirmation for some commands
### Commands
-* help: Lists the supported commands and flags.
-* deprovision: Attempt to clean the system and make it suitable for reprovisioning. The following operation deletes:
-
- * All SSH host keys (if Provisioning.RegenerateSshHostKeyPair is 'y' in the configuration file)
- * Nameserver configuration in `/etc/resolv.conf`
- * Root password from `/etc/shadow` (if Provisioning.DeleteRootPassword is 'y' in the configuration file)
- * Cached DHCP client leases
- * Resets host name to localhost.localdomain
-
-> [!WARNING]
-> Deprovisioning does not guarantee that the image is cleared of all sensitive information and suitable for redistribution.
-
->
->
-
-* deprovision+user: Performs everything in -deprovision (above) and also deletes the last provisioned user account (obtained from `/var/lib/waagent`) and associated data. This parameter is when de-provisioning an image that was previously provisioning on Azure so it may be captured and reused.
-* version: Displays the version of waagent
-* serialconsole: Configures GRUB to mark ttyS0 (the first serial port) as
- the boot console. This ensures that kernel bootup logs are sent to the
- serial port and made available for debugging.
-* daemon: Run waagent as a daemon to manage interaction with the platform. This argument is specified to waagent in the waagent init script.
-* start: Run waagent as a background process
+- `help`: Lists the supported commands and flags
+- `deprovision`: Attempt to clean the system and make it suitable for reprovisioning. The operation deletes:
+ - All SSH host keys, if `Provisioning.RegenerateSshHostKeyPair` is `y` in the configuration file
+ - Nameserver configuration in */etc/resolv.conf*
+ - Root password from */etc/shadow*, if `Provisioning.DeleteRootPassword` is `y` in the configuration file
+ - Cached DHCP client leases
+ - Resets host name to `localhost.localdomain`
+
+ > [!WARNING]
+ > Deprovisioning doesn't guarantee that the image is cleared of all sensitive information and suitable for redistribution.
+
+- `deprovision+user`: Performs everything in `deprovision` (described previously) and also deletes the last provisioned user account, obtained from */var/lib/waagent*, and associated data. Use this parameter when you deprovision an image that was previously provisioned on Azure so that it can be captured and reused (see the example after this list).
+- `version`: Displays the version of waagent.
+- `serialconsole`: Configures GRUB to mark ttyS0, the first serial port, as the boot console. This option ensures that kernel boot logs are sent to the serial port and made available for debugging.
+- `daemon`: Run waagent as a daemon to manage interaction with the platform. This argument is specified to waagent in the waagent *init* script.
+- `start`: Run waagent as a background process.
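As an example of the `deprovision+user` flow, the following is a common sketch of preparing a VM before capturing it as a generalized image; the `-force` flag skips the interactive confirmation, and the resource group and VM names are placeholders.

```bash
# Inside the VM: remove SSH host keys, the last provisioned user account,
# cached DHCP leases, and other machine-specific state as described above.
sudo waagent -deprovision+user -force
exit

# From your workstation: deallocate and generalize the VM before capturing it.
az vm deallocate --resource-group contoso-rg --name contoso-vm
az vm generalize --resource-group contoso-rg --name contoso-vm
```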
## Configuration
-A configuration file (/etc/waagent.conf) controls the actions of waagent.
-The following shows a sample configuration file:
+The */etc/waagent.conf* configuration file controls the actions of waagent. This example is a sample configuration file:
```config Provisioning.Enabled=y
HttpProxy.Port=None
AutoUpdate.Enabled=y ```
-The following various configuration options are described. Configuration options are of three types; Boolean, String, or Integer. The Boolean configuration options can be specified as "y" or "n". The special keyword "None" may be used for some string type configuration entries as the following details:
+Configuration options are of three types: `Boolean`, `String`, or `Integer`. The `Boolean` configuration options can be specified as `y` or `n`. The special keyword `None` might be used for some string type configuration entries.
-**Provisioning.Enabled:**
+### Provisioning.Enabled
```txt Type: Boolean Default: y ```
-This allows the user to enable or disable the provisioning functionality in the agent. Valid values are "y" or "n". If provisioning is disabled, SSH host and user keys in the image are preserved and any configuration specified in the Azure provisioning API is ignored.
+This option allows the user to enable or disable the provisioning functionality in the agent. Valid values are `y` and `n`. If provisioning is disabled, SSH host and user keys in the image are preserved and configuration in the Azure provisioning API is ignored.
> [!NOTE]
-> The `Provisioning.Enabled` parameter defaults to "n" on Linux Images that use cloud-init for provisioning.
->
->
+> The `Provisioning.Enabled` parameter defaults to `n` on Ubuntu Cloud Images that use cloud-init for provisioning.
-**Provisioning.DeleteRootPassword:**
+### Provisioning.DeleteRootPassword
```txt Type: Boolean Default: n ```
-If set, the root password in the /etc/shadow file is erased during the provisioning process.
+If `y`, the agent erases the root password in the */etc/shadow* file during the provisioning process.
-**Provisioning.RegenerateSshHostKeyPair:**
+### Provisioning.RegenerateSshHostKeyPair
```txt Type: Boolean Default: y ```
-If set, all SSH host key pairs (ecdsa, dsa, and rsa) are deleted during the provisioning process from `/etc/ssh/`. And a single fresh key pair is generated.
+If `y`, the agent deletes all SSH host key pairs from */etc/ssh/* during the provisioning process, including ECDSA, DSA, and RSA. The agent generates a single fresh key pair.
-The encryption type for the fresh key pair is configurable by the Provisioning.SshHostKeyPairType entry. Some distributions re-create SSH key pairs for any missing encryption types when the SSH daemon is restarted (for example, upon a reboot).
+Configure the encryption type for the fresh key pair by using the `Provisioning.SshHostKeyPairType` entry. Some distributions re-create SSH key pairs for any missing encryption types when the SSH daemon is restarted, for example, upon a reboot.
-**Provisioning.SshHostKeyPairType:**
+### Provisioning.SshHostKeyPairType
```txt Type: String Default: rsa ```
-This can be set to an encryption algorithm type that is supported by the SSH daemon on the virtual machine. The typically supported values are "rsa", "dsa" and "ecdsa". "putty.exe" on Windows does not support "ecdsa". So, if you intend to use putty.exe on Windows to connect to a Linux deployment, use "rsa" or "dsa".
+This option can be set to an encryption algorithm type that the SSH daemon supports on the VM. The typically supported values are `rsa`, `dsa`, and `ecdsa`. *putty.exe* on Windows doesn't support `ecdsa`. If you intend to use *putty.exe* on Windows to connect to a Linux deployment, use `rsa` or `dsa`.
-**Provisioning.MonitorHostName:**
+### Provisioning.MonitorHostName
```txt Type: Boolean Default: y ```
-If set, waagent monitors the Linux virtual machine for hostname changes (as returned by the "hostname" command) and automatically update the networking configuration in the image to reflect the change. In order to push the name change to the DNS servers, networking is restarted in the virtual machine. This results in brief loss of Internet connectivity.
+If `y`, waagent monitors the Linux VM for a host name change, as returned by the `hostname` command, and automatically updates the networking configuration in the image to reflect the change. In order to push the name change to the DNS servers, networking restarts on the VM. This restart results in brief loss of internet connectivity.
-**Provisioning.DecodeCustomData:**
+### Provisioning.DecodeCustomData
```txt Type: Boolean Default: n ```
-If set, waagent decodes CustomData from Base64.
+If `y`, waagent decodes `CustomData` from Base64.
-**Provisioning.ExecuteCustomData:**
+### Provisioning.ExecuteCustomData
```txt Type: Boolean Default: n ```
-If set, waagent executes CustomData after provisioning.
+If `y`, waagent runs `CustomData` after provisioning.
-**Provisioning.AllowResetSysUser:**
+### Provisioning.AllowResetSysUser
```txt Type: Boolean Default: n ```
-This option allows the password for the sys user to be reset; default is disabled.
+This option allows the password for the system user to be reset. The default is disabled.
-**Provisioning.PasswordCryptId:**
+### Provisioning.PasswordCryptId
```txt Type: String Default: 6 ```
-Algorithm used by crypt when generating password hash.
- 1 - MD5
- 2a - Blowfish
- 5 - SHA-256
- 6 - SHA-512
+This option specifies the algorithm that crypt uses when generating a password hash. Valid values are:
+
+- 1: MD5
+- 2a: Blowfish
+- 5: SHA-256
+- 6: SHA-512
-**Provisioning.PasswordCryptSaltLength:**
+### Provisioning.PasswordCryptSaltLength
```txt Type: String Default: 10 ```
-Length of random salt used when generating password hash.
+This option specifies the length of the random salt used when generating a password hash.
-**ResourceDisk.Format:**
+### ResourceDisk.Format
```txt Type: Boolean Default: y ```
-If set, the resource disk provided by the platform is formatted and mounted by waagent if the filesystem type requested by the user in "ResourceDisk.Filesystem" is anything other than "ntfs". A single partition of type Linux (83) is made available on the disk. This partition is not formatted if it can be successfully mounted.
+If `y`, waagent formats and mounts the resource disk provided by the platform, unless the file system type requested by the user in `ResourceDisk.Filesystem` is `ntfs`. The agent makes a single Linux partition (ID 83) available on the disk. This partition isn't formatted if it can be successfully mounted.
-**ResourceDisk.Filesystem:**
+### ResourceDisk.Filesystem
```txt Type: String Default: ext4 ```
-This specifies the filesystem type for the resource disk. Supported values vary by Linux distribution. If the string is X, then mkfs.X should be present on the Linux image.
+This option specifies the file system type for the resource disk. Supported values vary by Linux distribution. If the string is `X`, then `mkfs.X` should be present on the Linux image.
-**ResourceDisk.MountPoint:**
+### ResourceDisk.MountPoint
```txt Type: String Default: /mnt/resource ```
-This specifies the path at which the resource disk is mounted. The resource disk is a *temporary* disk, and might be emptied when the VM is deprovisioned.
+This option specifies the path at which the resource disk is mounted. The resource disk is a *temporary* disk, and might be emptied when the VM is deprovisioned.
-**ResourceDisk.MountOptions:**
+### ResourceDisk.MountOptions
```txt Type: String Default: None ```
-Specifies disk mount options to be passed to the mount -o command. This is a comma-separated list of values, ex. 'nodev,nosuid'. See mount(8) for details.
+Specifies disk mount options to be passed to the `mount -o` command. This value is a comma-separated list of values, for example, `nodev,nosuid`. For more information, see the mount(8) manual page.
-**ResourceDisk.EnableSwap:**
+### ResourceDisk.EnableSwap
```txt Type: Boolean Default: n ```
-If set, a swap file (/swapfile) is created on the resource disk and added to the system swap space.
+If set, the agent creates a swap file, */swapfile*, on the resource disk and adds it to the system swap space.
-**ResourceDisk.SwapSizeMB:**
+### ResourceDisk.SwapSizeMB
```txt Type: Integer Default: 0 ```
-The size of the swap file in megabytes.
+Specifies the size of the swap file in megabytes.
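For example, here's a minimal sketch of enabling a 2-GB swap file on the resource disk by editing these two options; the service name differs by distribution (`walinuxagent` on Ubuntu, `waagent` on many others).

```bash
# Enable a 2048 MB swap file on the resource disk by updating /etc/waagent.conf.
sudo sed -i \
    -e 's/^ResourceDisk.EnableSwap=.*/ResourceDisk.EnableSwap=y/' \
    -e 's/^ResourceDisk.SwapSizeMB=.*/ResourceDisk.SwapSizeMB=2048/' \
    /etc/waagent.conf

# Restart the agent service so it picks up the change.
sudo systemctl restart walinuxagent
```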
-**Logs.Verbose:**
+### Logs.Verbose
```txt Type: Boolean Default: n ```
-If set, log verbosity is boosted. Waagent logs to `/var/log/waagent.log` and utilizes the system logrotate functionality to rotate logs.
+If set, log verbosity is boosted. Waagent logs to */var/log/waagent.log* and uses the system `logrotate` functionality to rotate logs.
-**OS.EnableRDMA**
+### OS.EnableRDMA
```txt Type: Boolean
Default: n
If set, the agent attempts to install and then load an RDMA kernel driver that matches the version of the firmware on the underlying hardware.
-**OS.RootDeviceScsiTimeout:**
+### OS.RootDeviceScsiTimeout
```txt Type: Integer
Default: 300
This setting configures the SCSI timeout in seconds on the OS disk and data drives. If not set, the system defaults are used.
-**OS.OpensslPath:**
+### OS.OpensslPath
```txt Type: String Default: None ```
-This setting can be used to specify an alternate path for the openssl binary to use for cryptographic operations.
+This setting can be used to specify an alternate path for the *openssl* binary to use for cryptographic operations.
-**HttpProxy.Host, HttpProxy.Port:**
+### HttpProxy.Host, HttpProxy.Port
```txt Type: String
Default: None
If set, the agent uses this proxy server to access the internet.
-**AutoUpdate.Enabled:**
+### AutoUpdate.Enabled
```txt Type: Boolean Default: y ```
-Enable or disable auto-update for goal state processing; default is enabled.
+Enable or disable autoupdate for goal state processing. The default value is `y`.
## Linux guest agent automatic logs collection
-As of version 2.7+, The Azure Linux guest agent has a feature to automatically collect some logs and upload them. This feature currently requires systemd, and utilizes a new systemd slice called azure-walinuxagent-logcollector.slice to manage resources while performing the collection. The log collector's goal is to facilitate offline analysis, and therefore produces a ZIP file of some diagnostics logs before uploading them to the VM's Host. The ZIP file can then be retrieved by Engineering Teams and Support professionals to investigate issues at the behest of the VM owner. More technical information on the files collected by the guest agent can be found in the azurelinuxagent/common/logcollector_manifests.py file in the [agent's GitHub repository](https://github.com/Azure/WALinuxAgent).
+As of version 2.7+, the Azure Linux guest agent has a feature to automatically collect some logs and upload them. This feature currently requires `systemd`, and uses a new `systemd` slice called `azure-walinuxagent-logcollector.slice` to manage resources while it performs the collection.
+
+The purpose is to facilitate offline analysis. The agent produces a *.zip* file of some diagnostics logs before uploading them to the VM's host. Engineering teams and support professionals can retrieve the file to investigate issues for the VM owner. More technical information on the files collected by the guest agent can be found in the *azurelinuxagent/common/logcollector_manifests.py* file in the [agent's GitHub repository](https://github.com/Azure/WALinuxAgent).
-This can be disabled by editing ```/etc/waagent.conf``` updating ```Logs.Collect``` to ```n```
+This option can be disabled by editing */etc/waagent.conf*. Update `Logs.Collect` to `n`.
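A minimal sketch of that change, assuming the default configuration file location and an Ubuntu-style service name:

```bash
# Turn off automatic log collection, then restart the agent service
# (the service is named 'waagent' on some distributions).
sudo sed -i 's/^Logs.Collect=.*/Logs.Collect=n/' /etc/waagent.conf
sudo systemctl restart walinuxagent
```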
## Ubuntu Cloud Images
-Ubuntu Cloud Images utilize [cloud-init](https://launchpad.net/ubuntu/+source/cloud-init) to perform many configuration tasks that would otherwise be managed by the Azure Linux Agent. The following differences apply:
+Ubuntu Cloud Images use [cloud-init](https://launchpad.net/ubuntu/+source/cloud-init) to do many configuration tasks that the Azure Linux Agent would otherwise manage. The following differences apply:
-* **Provisioning.Enabled** defaults to "n" on Ubuntu Cloud Images that use cloud-init to perform provisioning tasks.
-* The following configuration parameters have no effect on Ubuntu Cloud Images that use cloud-init to manage the resource disk and swap space:
+- `Provisioning.Enabled` defaults to `n` on Ubuntu Cloud Images that use cloud-init to perform provisioning tasks.
+- The following configuration parameters have no effect on Ubuntu Cloud Images that use cloud-init to manage the resource disk and swap space:
- * **ResourceDisk.Format**
- * **ResourceDisk.Filesystem**
- * **ResourceDisk.MountPoint**
- * **ResourceDisk.EnableSwap**
- * **ResourceDisk.SwapSizeMB**
+ - `ResourceDisk.Format`
+ - `ResourceDisk.Filesystem`
+ - `ResourceDisk.MountPoint`
+ - `ResourceDisk.EnableSwap`
+ - `ResourceDisk.SwapSizeMB`
-* For more information, see the following resources to configure the resource disk mount point and swap space on Ubuntu Cloud Images during provisioning:
+- For more information, see the following resources to configure the resource disk mount point and swap space on Ubuntu Cloud Images during provisioning:
- * [Ubuntu Wiki: Configure Swap Partitions](https://go.microsoft.com/fwlink/?LinkID=532955&clcid=0x409)
- * [Injecting Custom Data into an Azure Virtual Machine](../windows/tutorial-automate-vm-deployment.md)
+ - [Ubuntu Wiki: AzureSwapPartitions](https://go.microsoft.com/fwlink/?LinkID=532955&clcid=0x409)
+ - [Deploy applications to a Windows virtual machine in Azure with the Custom Script Extension](../windows/tutorial-automate-vm-deployment.md)
virtual-machines Custom Script Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-linux.md
Please switch new and existing deployments to use Version 2. The new version is
## Prerequisites
-### Operating system
-
-The Custom Script Extension for Linux will run on supported operating systems. For more information, see [Endorsed Linux distributions on Azure](../linux/endorsed-distros.md).
+### Supported Linux distributions
+| **Linux Distro** | **x64** | **ARM64** |
+|:--|:--:|:--:|
+| Alma Linux | 9.x+ | 9.x+ |
+| CentOS | 7.x+, 8.x+ | 7.x+ |
+| Debian | 10+ | 11.x+ |
+| Flatcar Linux | 3374.2.x+ | 3374.2.x+ |
+| openSUSE | 12.3+ | Not Supported |
+| Oracle Linux | 6.4+, 7.x+, 8.x+ | Not Supported |
+| Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+ | 8.6+, 9.0+ |
+| Rocky Linux | 9.x+ | 9.x+ |
+| SLES | 12.x+, 15.x+ | 15.x SP4+ |
+| Ubuntu | 18.04+, 20.04+, 22.04+ | 20.04+, 22.04+ |
### Script location
virtual-machines Custom Script Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-windows.md
This article details how to use the Custom Script Extension by using the Azure P
> [!NOTE] > Don't use the Custom Script Extension to run `Update-AzVM` with the same VM as its parameter, because it will wait for itself.
-### Operating system
-
-The Custom Script Extension for Windows will run on these supported operating systems:
-
-* Windows Server 2008 R2
-* Windows Server 2012
-* Windows Server 2012 R2
-* Windows 10
-* Windows Server 2016
-* Windows Server 2016 Core
-* Windows Server 2019
-* Windows Server 2019 Core
-* Windows Server 2022
-* Windows Server 2022 Core
-* Windows 11
+### Supported Windows OS versions
+| **Windows OS** | **x64** |
+|:-|:-:|
+| Windows 10 | Supported |
+| Windows 11 | Supported |
+| Windows Server 2008 SP2 | Supported |
+| Windows Server 2008 R2 | Supported |
+| Windows Server 2012 | Supported |
+| Windows Server 2012 R2 | Supported |
+| Windows Server 2016 | Supported |
+| Windows Server 2016 Core | Supported |
+| Windows Server 2019 | Supported |
+| Windows Server 2019 Core | Supported |
+| Windows Server 2022 | Supported |
+| Windows Server 2022 Core | Supported |
### Script location
virtual-machines Dsc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-overview.md
Title: Desired State Configuration for Azure overview
-description: Learn how to use the Microsoft Azure extension handler for PowerShell Desired State Configuration (DSC). The article includes prerequisites, architecture, and cmdlets.
+description: Learn how to use the Microsoft Azure extension handler for PowerShell Desired State Configuration (DSC), including prerequisites, architecture, and cmdlets.
vm-windows Previously updated : 03/13/2023 Last updated : 03/28/2023 ms.devlang: azurecli + # Introduction to the Azure Desired State Configuration extension handler
-> [!NOTE]
-> Before you enable the DSC extension, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Automange named [machine configuration](../../governance/machine-configuration/overview.md). The machine configuration feature combines features of the Desired State Configuration (DSC) extension handler, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Machine configuration also includes hybrid machine support through [Arc-enabled servers](../../azure-arc/servers/overview.md).
+The Azure VM Agent for Azure virtual machines (VMs) and the associated extensions are part of Microsoft Azure infrastructure services. Azure VM extensions are software components that extend VM functionality and simplify various VM management operations.
+
+The primary use for the Azure Desired State Configuration (DSC) extension for Windows PowerShell is to bootstrap a VM to the
+[Azure Automation State Configuration (DSC) service](../../automation/automation-dsc-overview.md). This service provides [benefits](/powershell/dsc/managing-nodes/metaConfig#pull-service) that include ongoing management of the VM configuration and integration with other operational tools, such as Azure Monitor. You can use the extension to register your VMs to the service and gain a flexible solution that works across Azure subscriptions.
+
+You can run the DSC extension independently of the Automation DSC service, but this method only pushes a configuration to the VM. No ongoing reporting is available, other than locally in the VM. Before you enable the DSC extension, [review the available DSC versions](#available-dsc-versions), and choose the version that supports your configuration requirements.
-The Azure VM Agent and associated extensions are part of Microsoft Azure infrastructure services. VM extensions are software components that extend VM functionality and simplify various VM management operations.
+This article describes how to use the DSC extension for Automation onboarding, or use it as a tool to assign configurations to VMs with the Azure SDK.
-The primary use case for the Azure Desired State Configuration (DSC) extension is to bootstrap a VM to the
-[Azure Automation State Configuration (DSC) service](../../automation/automation-dsc-overview.md).
-The service provides [benefits](/powershell/dsc/managing-nodes/metaConfig#pull-service)
-that include ongoing management of the VM configuration and integration with other operational tools, such as Azure Monitoring.
-Using the extension to register VM's to the service provides a flexible solution that even works across Azure subscriptions.
+## Available DSC versions
-You can use the DSC extension independently of the Automation DSC service.
-However, this will only push a configuration to the VM.
-No ongoing reporting is available, other than locally in the VM.
+Several versions of Desired State Configuration are available for implementation. Before you enable the DSC extension, choose the DSC version that best supports your configuration and business goals.
-This article provides information about both scenarios: using the DSC extension for Automation onboarding, and using the DSC extension as a tool for assigning configurations to VMs by using the Azure SDK.
+| Version | Availability | Description |
+| | | |
+| **2.0** | General availability | [Desired State Configuration 2.0](/powershell/dsc/overview?view=dsc-2.0&preserve-view=true) is supported for use with the Azure Automanage [Machine Configuration](../../governance/machine-configuration/overview.md) feature. The machine configuration feature combines features of the DSC extension handler, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Machine configuration also includes hybrid machine support through [Arc-enabled servers](../../azure-arc/servers/overview.md). |
+| **1.1** | General availability | If your implementation doesn't use the Azure Automanage machine configuration feature, you should choose Desired State Configuration 1.1. For more information, see [PSDesiredStateConfiguration v1.1](/powershell/dsc/overview?view=dsc-1.1&preserve-view=true). |
+| **3.0** | Public preview | [Desired State Configuration 3.0 is available in public beta](/powershell/dsc/overview?view=dsc-3.0&preserve-view=true). This version should be used only with Azure machine configuration, or for nonproduction environments to test migrating away from Desired State Configuration 1.1. |
## Prerequisites -- **Local machine**: To interact with the Azure VM extension, you must use either the Azure portal or the Azure PowerShell SDK.-- **Guest Agent**: The Azure VM that's configured by the DSC configuration must be an OS that supports Windows Management Framework (WMF) 4.0 or later. For the full list of supported OS versions, see the [DSC extension version history](../../automation/automation-dsc-extension-history.md).
+- **Local machine**: To interact with the Azure DSC extension, you must use either the Azure portal or the Azure PowerShell SDK on the local machine.
+
+- **Guest agent**: The Azure VM that's prepared by the DSC configuration must use an operating system that supports Windows Management Framework (WMF) 4.0 or later. For the full list of supported operating system versions, see the [Azure DSC extension version history](../../automation/automation-dsc-extension-history.md).
## Terms and concepts
-This guide assumes familiarity with the following concepts:
+This article assumes familiarity with the following concepts:
+
+- **Configuration** refers to a DSC configuration document.
+
+- **Node** identifies a target for a DSC configuration. In this article, *node* always refers to an Azure VM.
-- **Configuration**: A DSC configuration document.-- **Node**: A target for a DSC configuration. In this document, *node* always refers to an Azure VM.-- **Configuration data**: A .psd1 file that has environmental data for a configuration.
+- **Configuration data** is stored in a PowerShell DSC format file (.psd1) that has environmental data for a configuration.
## Architecture
-The Azure DSC extension uses the Azure VM Agent framework to deliver, enact, and report on DSC configurations running on Azure VMs. The DSC extension accepts a configuration document and a set of parameters. If no file is provided, a [default configuration script](#default-configuration-script) is embedded with the extension. The default configuration script is used only to set metadata in [Local Configuration Manager](/powershell/dsc/managing-nodes/metaConfig).
+The Azure DSC extension uses the Azure VM Agent framework to deliver, enact, and report on DSC configurations running on Azure VMs. The DSC extension accepts a configuration document and a set of parameters. If no file is provided, a [default configuration script](#default-configuration-script) is embedded with the extension. The default configuration script is used only to set metadata in [Local Configuration Manager](/powershell/dsc/managing-nodes/metaConfig).
-When the extension is called for the first time, it installs a version of WMF by using the following logic:
+When the extension is called the first time, it installs a version of WMF by using the following logic:
-- If the Azure VM OS is Windows Server 2016, no action is taken. Windows Server 2016 already has the latest version of PowerShell installed.-- If the **wmfVersion** property is specified, that version of WMF is installed, unless that version is incompatible with the VM's OS.-- If no **wmfVersion** property is specified, the latest applicable version of WMF is installed.
+- If the Azure VM operating system is Windows Server 2016, no action is taken. Windows Server 2016 already has the latest version of PowerShell installed.
-Installing WMF requires a restart. After restarting, the extension downloads the .zip file that's specified in the **modulesUrl** property, if provided. If this location is in Azure Blob storage, you can specify an SAS token in the **sasToken** property to access the file. After the .zip is downloaded and unpacked, the configuration function defined in **configurationFunction** runs to generate an .mof([Managed Object Format](/windows/win32/wmisdk/managed-object-format--mof-)) file. The extension then runs `Start-DscConfiguration -Force` by using the generated .mof file. The extension captures output and writes it to the Azure status channel.
+- If the `wmfVersion` property is specified, the specified version of WMF is installed, unless the specified version is incompatible with the operating system on the VM.
+
+- If no `wmfVersion` property is specified, the latest applicable version of WMF is installed.
+
+The WMF installation process requires a restart. After you restart, the extension downloads the .zip file that's specified in the `modulesUrl` property, if provided. If this location is in Azure Blob Storage, you can specify an SAS token in the `sasToken` property to access the file. After the .zip downloads and unpacks, the configuration function defined in `configurationFunction` runs to generate a [Managed Object Format (MOF)](/windows/win32/wmisdk/managed-object-format--mof-) file (.mof). The extension then runs the `Start-DscConfiguration -Force` command by using the generated .mof file. The extension captures output and writes it to the Azure status channel.
### Default configuration script
-The Azure DSC extension includes a default configuration script that's intended to be used when you onboard a VM to the Azure Automation DSC service. The script parameters are aligned with the configurable properties of [Local Configuration Manager](/powershell/dsc/managing-nodes/metaConfig). For script parameters, see [Default configuration script](dsc-template.md#default-configuration-script) in [Desired State Configuration extension with Azure Resource Manager templates](dsc-template.md). For the full script, see the [Azure quickstart template in GitHub](https://github.com/Azure/azure-quickstart-templates/blob/master/demos/azmgmt-demo/nestedtemplates/scripts/UpdateLCMforAAPull.zip).
+The Azure DSC extension includes a default configuration script that's intended to be used when you onboard a VM to the Azure Automation State Configuration service. The script parameters are aligned with the configurable properties of [Local Configuration Manager](/powershell/dsc/managing-nodes/metaConfig). For script parameters, see [Default configuration script](dsc-template.md#default-configuration-script) in [Desired State Configuration extension with Azure Resource Manager (ARM) templates](dsc-template.md). For the full script, see the [Azure Quickstart Template in GitHub](https://github.com/Azure/azure-quickstart-templates/blob/master/demos/azmgmt-demo/nestedtemplates/scripts/UpdateLCMforAAPull.zip).
-## Information for registering with Azure Automation State Configuration (DSC) service
+## Azure Automation State Configuration registration
-When using the DSC Extension to register a node with the State Configuration service,
-three values will need to be provided.
+When you use the Azure DSC extension to register a node with the Azure Automation State Configuration service, you provide the following values:
-- RegistrationUrl - the https address of the Azure Automation account-- RegistrationKey - a shared secret used to register nodes with the service-- NodeConfigurationName - the name of the Node Configuration (MOF) to pull from the service to configure the server role
+- `RegistrationUrl`: The https address of the Azure Automation account.
+- `RegistrationKey`: A shared secret that's used to register nodes with the service.
+- `NodeConfigurationName`: The name of the node configuration (MOF) to pull from the service to configure the server role. The value is the name of the node configuration and not the name of the Configuration.
-This information can be seen in the Azure portal or you can use PowerShell.
+You can gather these values from the Azure portal, or run the following commands in Windows PowerShell:
```powershell (Get-AzAutomationRegistrationInfo -ResourceGroupName <resourcegroupname> -AutomationAccountName <accountname>).Endpoint (Get-AzAutomationRegistrationInfo -ResourceGroupName <resourcegroupname> -AutomationAccountName <accountname>).PrimaryKey ```
+### Node configuration name
+
+For the `NodeConfigurationName` parameter, be sure to provide the name of the node configuration and not the Configuration.
+
+The Configuration is defined in a script that's used [to compile the node configuration (MOF file)](../../automation/automation-dsc-compile.md). The name of the node configuration is always the name of the Configuration followed by a period `.` and either `localhost` or a specific computer name.
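For example, a Configuration named `WebServerConfig` that's compiled for `localhost` produces a node configuration named `WebServerConfig.localhost`; the names here are illustrative.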
+ > [!WARNING]
-> For the Node Configuration name, make sure the node configuration exists in Azure State Configuration. If it does not, the extension deployment will return a failure.
+> Make sure the node configuration exists in Azure Automation State Configuration. If this value doesn't exist, the extension deployment returns a failure.
-Make sure you are using the name of the *Node Configuration* and not the Configuration.
-A Configuration is defined in a script that is used
-[to compile the Node Configuration (MOF file)](../../automation/automation-dsc-compile.md).
-The name will always be the Configuration followed by a period `.` and either `localhost` or a specific computer name.
+## ARM template deployment
-## DSC extension in Resource Manager templates
+The most common approach for deploying the DSC extension is to use Azure Resource Manager templates. For more information and for examples of how to include the DSC extension in ARM templates, see [Desired State Configuration extension with ARM templates](dsc-template.md).
-In most scenarios, Resource Manager deployment templates are the expected way to work with the DSC extension. For more information and for examples of how to include the DSC extension in Resource Manager deployment templates, see [Desired State Configuration extension with Azure Resource Manager templates](dsc-template.md).
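If you prefer a direct CLI call over a full template, the following Azure CLI sketch deploys the same extension to an existing Windows VM. The settings keys mirror the schema described in the ARM template article; the resource names, package URL, script, function, and extension version shown here are placeholders.

```bash
# Illustrative only: apply the DSC extension to an existing Windows VM.
az vm extension set \
    --resource-group contoso-rg \
    --vm-name contoso-vm \
    --publisher Microsoft.Powershell \
    --name DSC \
    --version 2.77 \
    --settings '{
        "wmfVersion": "latest",
        "configuration": {
            "url": "https://contosostorage.blob.core.windows.net/dsc/config.ps1.zip",
            "script": "config.ps1",
            "function": "Main"
        }
    }'
```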
+## PowerShell cmdlet deployment
-## DSC extension PowerShell cmdlets
+PowerShell cmdlets for managing the DSC extension are ideal for interactive troubleshooting and information-gathering scenarios. You can use the cmdlets to package, publish, and monitor DSC extension deployments. Cmdlets for the DSC extension aren't currently updated to work with the [default configuration script](#default-configuration-script).
-The PowerShell cmdlets that are used to manage the DSC extension are best used in interactive troubleshooting and information-gathering scenarios. You can use the cmdlets to package, publish, and monitor DSC extension deployments. Cmdlets for the DSC extension aren't yet updated to work with the [default configuration script](#default-configuration-script).
+Here are some of the PowerShell cmdlets that are available:
-The **Publish-AzVMDscConfiguration** cmdlet takes in a configuration file, scans it for dependent DSC resources, and then creates a .zip file. The .zip file contains the configuration and DSC resources that are needed to enact the configuration. The cmdlet can also create the package locally by using the *-OutputArchivePath* parameter. Otherwise, the cmdlet publishes the .zip file to blob storage, and then secures it with an SAS token.
+- The **Publish-AzVMDscConfiguration** cmdlet takes in a configuration file, scans it for dependent DSC resources, and then creates a .zip file. The .zip file contains the configuration and DSC resources that are needed to enact the configuration. The cmdlet can also create the package locally by using the `-OutputArchivePath` parameter. Otherwise, the cmdlet publishes the .zip file to Blob Storage, and then secures it with an SAS token.
-The .ps1 configuration script that the cmdlet creates is in the .zip file at the root of the archive folder. The module folder is placed in the archive folder in resources.
+ The PowerShell configuration script (.ps1) created by the cmdlet is in the .zip file at the root of the archive folder. The module folder is placed in the archive folder in resources.
-The **Set-AzVMDscExtension** cmdlet injects the settings that the PowerShell DSC extension requires into a VM configuration object.
+- The **Set-AzVMDscExtension** cmdlet injects the settings that the PowerShell DSC extension requires into a VM configuration object.
-The **Get-AzVMDscExtension** cmdlet retrieves the DSC extension status of a specific VM.
+- The **Get-AzVMDscExtension** cmdlet retrieves the DSC extension status of a specific VM.
-The **Get-AzVMDscExtensionStatus** cmdlet retrieves the status of the DSC configuration that's enacted by the DSC extension handler. This action can be performed on a single VM or on a group of VMs.
+- The **Get-AzVMDscExtensionStatus** cmdlet retrieves the status of the DSC configuration that's enacted by the DSC extension handler. This action can be performed on a single VM or a group of VMs.
-The **Remove-AzVMDscExtension** cmdlet removes the extension handler from a specific VM. This cmdlet does *not* remove the configuration, uninstall WMF, or change the applied settings on the VM. It only removes the extension handler.
+- The **Remove-AzVMDscExtension** cmdlet removes the extension handler from a specific VM. Keep in mind that this cmdlet doesn't remove the configuration, uninstall WMF, or change the applied settings on the VM. The cmdlet only removes the extension handler.
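For example, the `-OutputArchivePath` option mentioned above can be used to build the package locally and inspect its contents before anything is uploaded. This is only a sketch; the configuration path and output paths are placeholder values:

```powershell
# Package a configuration script and its dependent DSC resources into a local .zip file.
Publish-AzVMDscConfiguration -ConfigurationPath ".\iisInstall.ps1" -OutputArchivePath ".\iisInstall.ps1.zip"

# Inspect the package: the .ps1 script sits at the archive root, with module folders alongside it.
Expand-Archive -Path ".\iisInstall.ps1.zip" -DestinationPath ".\iisInstall-package" -Force
Get-ChildItem ".\iisInstall-package" -Recurse
```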
-Important information about Resource Manager DSC extension cmdlets:
+### Important considerations
+
+There are several considerations to keep in mind when working with Azure Resource Manager cmdlets.
- Azure Resource Manager cmdlets are synchronous.
-- The *ResourceGroupName*, *VMName*, *ArchiveStorageAccountName*, *Version*, and *Location* parameters are all required.
-- *ArchiveResourceGroupName* is an optional parameter. You can specify this parameter when your storage account belongs to a different resource group than the one where the VM is created.
-- Use the **AutoUpdate** switch to automatically update the extension handler to the latest version when it's available. This parameter has the potential to cause restarts on the VM when a new version of WMF is released.
-### Get started with cmdlets
+- Several parameters are required, including `ResourceGroupName`, `VMName`, `ArchiveStorageAccountName`, `Version`, and `Location`.
+
+- `ArchiveResourceGroupName` is an optional parameter. Specify this parameter when your storage account belongs to a different resource group than the one where the VM is created.
+
+- Use the `AutoUpdate` switch to automatically update the extension handler to the latest version when it's available. This parameter has the potential to cause restarts on the VM when a new version of WMF is released.
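To illustrate how these parameters fit together, here's a rough sketch of applying the extension with the `AutoUpdate` switch and an archive stored in a different resource group. All names are placeholders, and the version number is only an example:

```powershell
# Apply the DSC extension; the configuration archive lives in a storage account in another resource group.
Set-AzVMDscExtension -ResourceGroupName "dscVmDemo" -VMName "myVM" -Location "eastus" `
    -ArchiveStorageAccountName "demostorage" -ArchiveResourceGroupName "storageRG" `
    -ArchiveBlobName "iisInstall.ps1.zip" -ConfigurationName "IISInstall" `
    -Version "2.76" -AutoUpdate
```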
-The Azure DSC extension can use DSC configuration documents to directly configure Azure VMs during deployment. This step doesn't register the node to Automation. The node is *not* centrally managed.
+### Configuration with PowerShell cmdlets
-The following example shows a simple example of a configuration. Save the configuration locally as iisInstall.ps1.
+The Azure DSC extension can use DSC configuration documents to directly configure Azure VMs during deployment. This step doesn't register the node to Automation. Keep in mind that the node isn't centrally managed.
+
+The following code shows a simple example configuration. To work with this example, save this configuration locally as the **iisInstall.ps1** script file.
```powershell configuration IISInstall
configuration IISInstall
} ```
-The following commands place the iisInstall.ps1 script on the specified VM. The commands also execute the configuration, and then report back on status.
+The following PowerShell commands place the iisInstall.ps1 script on the specified VM. The commands also execute the configuration, and then report back on status.
```powershell $resourceGroup = 'dscVmDemo'
Set-AzVMDscExtension -Version '2.76' -ResourceGroupName $resourceGroup -VMName $
## Azure CLI deployment
-The Azure CLI can be used to deploy the DSC extension to an existing virtual machine.
+The Azure CLI can be used to deploy the DSC extension to an existing VM. The following examples show how to deploy a VM on Windows or Linux.
-For a virtual machine running Windows:
+For a VM running Windows, use the following command:
```azurecli az vm extension set \
az vm extension set \
--settings '{}' ```
-For a virtual machine running Linux:
+For a VM running Linux, use the following command:
```azurecli az vm extension set \
az vm extension set \
--settings '{}' ```
-## Azure portal functionality
+## Azure portal deployment
-To set up DSC in the portal:
+To set up the DSC extension in the Azure portal, follow these steps:
1. Go to a VM.
-2. Under **Settings**, select **Extensions**.
-3. In the new page that's created, select **+ Add**, and then select **PowerShell Desired State Configuration**.
-4. Click **Create** at the bottom of the extension information page.
-The portal collects the following input:
+1. Under **Settings**, select **Extensions + Applications**.
+
+1. Under **Extensions**, select **+ Add**.
+
+1. Select **PowerShell Desired State Configuration**, then select **Next**.
+
+1. Configure the following parameters for the DSC extension.
+
+ > [!Note]
+ > If you're working with a [default configuration script](#default-configuration-script), keep in mind that most of the following parameters must be defined directly in the Azure portal rather than through the script.
+
+ - **Configuration Modules or Script**: (Required) Provide the Configuration modules or script file for your VM.
+
+ Configuration modules and scripts require a .ps1 file that has a configuration script or a .zip file with a .ps1 configuration script at the root. If you use a .zip file, all dependent resources must be included in module folders in the .zip file. You can create the .zip file by using the **Publish-AzureVMDscConfiguration -OutputArchivePath** cmdlet that's included in the Azure PowerShell SDK. The .zip file is uploaded to your user Blob Storage and secured by an SAS token.
+
+ - **Module-qualified Name of Configuration**: (Required) Specify this setting to include multiple configuration functions in a single .ps1 script file. For this setting, enter the name of the configuration .ps1 script file followed by a backslash `\` and then the name of the configuration function. For example, if the .ps1 script file has the name **configuration.ps1** and the configuration name is **IisInstall**, enter the value `configuration.ps1\IisInstall` for the setting.
-- **Configuration Modules or Script**: This field is mandatory (the form has not been updated for the [default configuration script](#default-configuration-script)). Configuration modules and scripts require a .ps1 file that has a configuration script or a .zip file with a .ps1 configuration script at the root. If you use a .zip file, all dependent resources must be included in module folders in the .zip. You can create the .zip file by using the **Publish-AzureVMDscConfiguration -OutputArchivePath** cmdlet that's included in the Azure PowerShell SDK. The .zip file is uploaded to your user blob storage and secured by an SAS token.
+ - **Configuration Arguments**: If the configuration function takes arguments, enter the values by using the format `argumentName1=value1,argumentName2=value2`. Notice that this format differs from the format that's used to specify configuration arguments in PowerShell cmdlets or ARM templates.
-- **Module-qualified Name of Configuration**: You can include multiple configuration functions in a .ps1 file. Enter the name of the configuration .ps1 script followed by \\ and the name of the configuration function. For example, if your .ps1 script has the name configuration.ps1 and the configuration is **IisInstall**, enter **configuration.ps1\IisInstall**.
+ > [!Note]
+ > The configuration arguments can be defined in a [default configuration script](#default-configuration-script).
-- **Configuration Arguments**: If the configuration function takes arguments, enter them here in the format **argumentName1=value1,argumentName2=value2**. This format is a different format in which configuration arguments are accepted in PowerShell cmdlets or Resource Manager templates.
+ - **Configuration Data PSD1 File**: If your configuration requires a configuration data file in .psd1 format, use this setting to select the data file and upload it to your user Blob Storage. The configuration data file is secured with an SAS token in Blob Storage.
-- **Configuration Data PSD1 File**: If your configuration requires a configuration data file in `.psd1`, use this field to select the data file and upload it to your user blob storage. The configuration data file is secured by an SAS token in blob storage.
+ - **WMF Version**: Specify the version of Windows Management Framework to install on your VM. If you choose **latest**, which is the default value, the system installs the most recent version of WMF. Other possible values include 4.0, 5.0, and 5.1. The possible values are subject to updates.
-- **WMF Version**: Specifies the version of Windows Management Framework (WMF) that should be installed on your VM. Setting this property to latest installs the most recent version of WMF. Currently, the only possible values for this property are 4.0, 5.0, 5.1, and latest. These possible values are subject to updates. The default value is **latest**.
+ - **Data Collection**: Enable this setting if you want the DSC extension to collect telemetry about your VM. For more information, see [Azure DSC extension data collection](https://devblogs.microsoft.com/powershell/azure-dsc-extension-data-collection-2/).
-- **Data Collection**: Determines if the extension will collect telemetry. For more information, see [Azure DSC extension data collection](https://devblogs.microsoft.com/powershell/azure-dsc-extension-data-collection-2/).
+ - **Version**: (Required) Specify the version of the DSC extension to install. For information about versions, see [Azure DSC extension version history](../../automation/automation-dsc-extension-history.md).
-- **Version**: Specifies the version of the DSC extension to install. For information about versions, see [DSC extension version history](../../automation/automation-dsc-extension-history.md).
+ - **Auto Upgrade Minor Version**: This setting maps to the `AutoUpdate` switch in the cmdlets. Configure this setting to enable the DSC extension to automatically update to the latest version during installation. **Yes** instructs the DSC extension handler to use the latest available version. **No** (default) forces installation of the version you specify in the **Version** setting.
-- **Auto Upgrade Minor Version**: This field maps to the **AutoUpdate** switch in the cmdlets and enables the extension to automatically update to the latest version during installation. **Yes** will instruct the extension handler to use the latest available version and **No** will force the **Version** specified to be installed. Selecting neither **Yes** nor **No** is the same as selecting **No**.
+1. After you configure the parameters, select **Review + Create**, and then select **Create**.
-## Logs
+## DSC extension logs
-Logs for the extension are stored in the following location: `C:\WindowsAzure\Logs\Plugins\Microsoft.Powershell.DSC\<version number>`
+You can view logs for the Azure DSC extension on the VM under `C:\WindowsAzure\Logs\Plugins\Microsoft.Powershell.DSC\<version number>`.
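If you need to dig into these logs while troubleshooting, a small sketch like the following, run in an elevated PowerShell session on the VM itself, surfaces the most recent log files. The path assumes the default install location shown above:

```powershell
# List the newest DSC extension log files on the VM (run locally, elevated).
Get-ChildItem "C:\WindowsAzure\Logs\Plugins\Microsoft.Powershell.DSC" -Recurse -Filter "*.log" |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 5 FullName, LastWriteTime
```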
## Next steps

- For more information about PowerShell DSC, go to the [PowerShell documentation center](/powershell/dsc/overview).
-- Examine the [Resource Manager template for the DSC extension](dsc-template.md).
+- Examine the [ARM template for the Azure DSC extension](dsc-template.md).
- For more functionality that you can manage by using PowerShell DSC, and for more DSC resources, browse the [PowerShell gallery](https://www.powershellgallery.com/packages?q=DscResource&x=0&y=0).
-- For details about passing sensitive parameters into configurations, see [Manage credentials securely with the DSC extension handler](dsc-credentials.md).
+- For details about passing sensitive parameters into configurations, see [Manage credentials securely with the Azure DSC extension handler](dsc-credentials.md).
virtual-machines Network Watcher Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-linux.md
The Network Watcher Agent extension can be configured for the following Linux di
| Distribution | Version | |||
-| Ubuntu | 12+ |
+| Ubuntu | 16+ |
| Debian | 7 and 8 |
-| Red Hat | 6, 7 and 8+ |
-| Oracle Linux | 6.8+, 7 and 8+ |
-| SUSE Linux Enterprise Server | 11, 12 and 15 |
+| Red Hat | 6.10, 7 and 8+ |
+| Oracle Linux | 6.10, 7 and 8+ |
+| SUSE Linux Enterprise Server | 12 and 15 |
| OpenSUSE Leap | 42.3+ |
-| CentOS | 6.5+ and 7 |
-| CoreOS | 899.17.0+ |
+| CentOS | 6.10 and 7 |
+> [!IMPORTANT]
+> Keep in mind that Red Hat Enterprise Linux 6.x and Oracle Linux 6.x have already reached end of life (EOL).
+> RHEL 6.10 has [ELS support](https://www.redhat.com/en/resources/els-datasheet) available, which [ends on 06/2024](https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204).
+> Oracle Linux 6.10 has [ELS support](https://www.oracle.com/a/ocom/docs/linux/oracle-linux-extended-support-ds.pdf) available, which [ends on 07/2024](https://www.oracle.com/a/ocom/docs/elsp-lifetime-069338.pdf).
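Once you've confirmed that your distribution is supported, one way to install the agent on an existing VM is with the Azure PowerShell `Set-AzVMExtension` cmdlet. This is a sketch rather than the article's own procedure; the resource names are placeholders and the handler version is only an example:

```powershell
# Install the Network Watcher Agent extension on an existing Linux VM.
Set-AzVMExtension -ResourceGroupName "myResourceGroup" -VMName "myLinuxVM" `
    -Name "NetworkWatcherAgentLinux" `
    -Publisher "Microsoft.Azure.NetworkWatcher" `
    -ExtensionType "NetworkWatcherAgentLinux" `
    -TypeHandlerVersion "1.4" `
    -Location "eastus"
```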
### Internet connectivity
virtual-machines Stackify Retrace Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/stackify-retrace-linux.md
The Retrace agent can be run against these Linux distributions
| Distribution | Version | |||
-| Ubuntu | 16.04 LTS, 14.04 LTS, 16.10 and 17.04 |
-| Debian | 7.9+ and 8.2+, 9 |
-| Red Hat | 6.7+, 7.1+ |
-| CentOS | 6.3+, 7.0+ |
+| Ubuntu | 16.04 LTS |
+| Debian | 9 |
+| Red Hat | 6.10, 7.1+ |
+| CentOS | 6.10, 7.0+ |
+> [!IMPORTANT]
+> Keep in mind that Red Hat Enterprise Linux 6.x has already reached end of life (EOL).
+> RHEL 6.10 has [ELS support](https://www.redhat.com/en/resources/els-datasheet) available, which [ends on 06/2024](https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204).
### Internet connectivity The Stackify Agent extension for Linux requires that the target virtual machine is connected to the internet.
virtual-machines Vmaccess https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/vmaccess.md
This article shows you how to use the Azure VMAccess Extension to check or repai
> If you use the VMAccess Extension to reset the password of your VM after installing the AAD Login Extension you will need to rerun the AAD Login Extension to re-enable AAD Login for your machine. ## Prerequisites
-### Operating system
The VM Access extension can be run against these Linux distributions:
-| Distribution | Version |
-|||
-| Ubuntu | 16.04 LTS, 14.04 LTS and 12.04 LTS |
-| Debian | Debian 7.9+, 8.2+ |
-| Red Hat | RHEL 6.7+, 7.1+ |
-| Oracle Linux | 6.4+, 7.0+ |
-| Suse | 11 and 12 |
-| OpenSuse | openSUSE Leap 42.2+ |
-| CentOS | CentOS 6.3+, 7.0+ |
-| CoreOS | 494.4.0+ |
+### Supported Linux distributions
+| **Linux Distro** | **x64** | **ARM64** |
+|:--|:--:|:--:|
+| Alma Linux | 9.x+ | 9.x+ |
+| CentOS | 7.x+, 8.x+ | 7.x+ |
+| Debian | 10+ | 11.x+ |
+| Flatcar Linux | 3374.2.x+ | 3374.2.x+ |
+| openSUSE | 12.3+ | Not Supported |
+| Oracle Linux | 6.4+, 7.x+, 8.x+ | Not Supported |
+| Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+ | 8.6+, 9.0+ |
+| Rocky Linux | 9.x+ | 9.x+ |
+| SLES | 12.x+, 15.x+ | 15.x SP4+ |
+| Ubuntu | 18.04+, 20.04+, 22.04+ | 20.04+, 22.04+ |
## Ways to use the VMAccess Extension There are two ways that you can use the VMAccess Extension on your Linux VMs:
virtual-machines Cloud Init Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloud-init-deep-dive.md
description: Deep dive for understanding provisioning an Azure VM using cloud-in
Previously updated : 07/06/2020 Last updated : 03/29/2023
# Diving deeper into cloud-init
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
To learn more about [cloud-init](https://cloudinit.readthedocs.io/en/latest/https://docsupdatetracker.net/index.html) or troubleshoot it at a deeper level, you need to understand how it works. This document highlights the important parts, and explains the Azure specifics.
-When cloud-init is included in a generalized image, and a VM is created from that image, it will process configurations and run through 5 stages during the initial boot. These stages matter, as it shows you at what point cloud-init will apply configurations.
-
+When cloud-init is included in a generalized image, and a VM is created from that image, it processes configurations and runs through five stages during the initial boot. These stages matter because they show you at what point cloud-init applies configurations.
## Understand Cloud-Init configuration
-Configuring a VM to run on a platform, means cloud-init needs to apply multiple configurations, as an image consumer, the main configurations you will be interacting with is `User data` (customData), which supports multiple formats. For more information, see [User-Data Formats & cloud-init 21.2 documentation](https://cloudinit.readthedocs.io/en/latest/topics/format.html#user-data-formats). You also have the ability to add and run scripts (/var/lib/cloud/scripts) for additional configuration, below discusses this in more detail.
+
+Configuring a VM to run on a platform means cloud-init needs to apply multiple configurations. As an image consumer, the main configuration you interact with is `User data` (customData), which supports multiple formats. For more information, see [User-Data Formats & cloud-init 21.2 documentation](https://cloudinit.readthedocs.io/en/latest/topics/format.html#user-data-formats). You can also add and run scripts (/var/lib/cloud/scripts) for other configuration.
Some configurations are already baked into Azure Marketplace images that come with cloud-init, such as:
-1. **Cloud data source** - cloud-init contains code that can interact with cloud platforms, these are called 'datasources'. When a VM is created from a cloud-init image in [Azure](https://cloudinit.readthedocs.io/en/latest/reference/datasources/azure.html#azure), cloud-init loads the Azure datasource, which will interact with the Azure metadata endpoints to get the VM specific configuration.
-2. **Runtime config** (/run/cloud-init)
-3. **Image config** (/etc/cloud), like `/etc/cloud/cloud.cfg`, `/etc/cloud/cloud.cfg.d/*.cfg`. An example of where this is used in Azure, it is common for the Linux OS images with cloud-init to have an Azure datasource directive, that tells cloud-init what datasource it should use, this saves cloud-init time:
+* **Cloud data source** - cloud-init contains code that can interact with cloud platforms; this code is organized into 'datasources'. When a VM is created from a cloud-init image in [Azure](https://cloudinit.readthedocs.io/en/latest/reference/datasources/azure.html#azure), cloud-init loads the Azure datasource, which interacts with the Azure metadata endpoints to get the VM-specific configuration.
+* **Runtime config** (/run/cloud-init).
+* **Image config** (/etc/cloud), like `/etc/cloud/cloud.cfg` and `/etc/cloud/cloud.cfg.d/*.cfg`. As an example of where this configuration is used in Azure, it's common for Linux OS images with cloud-init to include an Azure datasource directive that tells cloud-init which datasource it should use; this configuration saves cloud-init time:
```bash
- /etc/cloud/cloud.cfg.d# cat 90_dpkg.cfg
+ sudo cat /etc/cloud/cloud.cfg.d/90_dpkg.cfg
+ ```
+
+ ```output
   # to update this file, run dpkg-reconfigure cloud-init
   datasource_list: [ Azure ]
   ```

## Cloud-init boot stages (processing configuration)
-When provisioning with cloud-init, there are 5 stages of boot, which process configuration, and shown in the logs.
-
-1. [Generator Stage](https://cloudinit.readthedocs.io/en/latest/topics/boot.html#generator): The cloud-init systemd generator starts, and determines that cloud-init should be included in the boot goals, and if so, it enables cloud-init.
+When you provision VMs with cloud-init, there are five stages of boot, which process configuration and are shown in the logs.
-2. [Cloud-init Local Stage](https://cloudinit.readthedocs.io/en/latest/topics/boot.html#local): Here cloud-init will look for the local "Azure" datasource, which will enable cloud-init to interface with Azure, and apply a networking configuration, including fallback.
+1. [Generator Stage](https://cloudinit.readthedocs.io/en/latest/topics/boot.html#generator): The cloud-init systemd generator starts, and determines that cloud-init should be included in the boot goals, and if so, it enables cloud-init.
+2. [Cloud-init Local Stage](https://cloudinit.readthedocs.io/en/latest/topics/boot.html#local): Here cloud-init looks for the local "Azure" datasource, which enables cloud-init to interface with Azure and apply a networking configuration, including fallback.
+3. [Cloud-init init Stage (Network)](https://cloudinit.readthedocs.io/en/latest/topics/boot.html#network): Networking should be online, and the NIC and route table information should be generated. At this stage, the modules listed in `cloud_init_modules` in `/etc/cloud/cloud.cfg` run. The VM in Azure is mounted, the ephemeral disk is formatted, and the hostname is set, along with other tasks.
-3. [Cloud-init init Stage (Network)](https://cloudinit.readthedocs.io/en/latest/topics/boot.html#network): Networking should be online, and the NIC and route table information should be generated. At this stage, the modules listed in `cloud_init_modules` in /etc/cloud/cloud.cfg will be run. The VM in Azure will be mounted, the ephemeral disk is formatted, the hostname is set, along with other tasks.
+ The following are some of the `cloud_init_modules`:
- These are some of the `cloud_init_modules`:
-
- ```bash
+ ```config
- migrator - seed_random - bootcmd
When provisioning with cloud-init, there are 5 stages of boot, which process con
- update_hostname - ssh ```
-
- After this stage, cloud-init will signal to the Azure platform that the VM has been provisioned successfully. Some modules may have failed, not all module failures will result in a provisioning failure.
-
-4. [Cloud-init Config Stage](https://cloudinit.readthedocs.io/en/latest/topics/boot.html#config): At this stage, the modules in `cloud_config_modules` defined and listed in /etc/cloud/cloud.cfg will be run.
+ After this stage, cloud-init sends a signal to the Azure platform that the VM has been provisioned successfully. Some modules may have failed; not all module failures result in a provisioning failure.
-5. [Cloud-init Final Stage](https://cloudinit.readthedocs.io/en/latest/topics/boot.html#final): At this final stage, the modules in `cloud_final_modules`, listed in /etc/cloud/cloud.cfg, will be run. Here modules that need to be run late in the boot process run, such as package installations and run scripts etc.
+4. [Cloud-init Config Stage](https://cloudinit.readthedocs.io/en/latest/topics/boot.html#config): At this stage, the modules in `cloud_config_modules`, defined and listed in `/etc/cloud/cloud.cfg`, run.
+5. [Cloud-init Final Stage](https://cloudinit.readthedocs.io/en/latest/topics/boot.html#final): At this final stage, the modules in `cloud_final_modules`, listed in `/etc/cloud/cloud.cfg`, run. Modules that need to run late in the boot process run here, such as package installations and scripts.
- - During this stage, you can run scripts by placing them in the directories under `/var/lib/cloud/scripts`:
- - `per-boot` - scripts within this directory, run on every reboot
- - `per-instance` - scripts within this directory run when a new instance is first booted
- - `per-once` - scripts within this directory run only once
+ - During this stage, you can run scripts by placing them in the directories under `/var/lib/cloud/scripts`:
+ - `per-boot` - scripts within this directory, run on every reboot
+ - `per-instance` - scripts within this directory run when a new instance is first booted
+ - `per-once` - scripts within this directory run only once
## Next steps
virtual-machines Disk Encryption Cli Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-cli-quickstart.md
Previously updated : 01/04/2023 Last updated : 03/29/2023
Create a VM with [az vm create](/cli/azure/vm#az-vm-create). The following examp
az vm create \ --resource-group "myResourceGroup" \ --name "myVM" \
- --image "Canonical:UbuntuServer:16.04-LTS:latest" \
+ --image "Canonical:UbuntuServer:20.04-LTS:latest" \
--size "Standard_D2S_V3"\ --generate-ssh-keys ```
+> [!NOTE]
+> Any [ADE-supported Linux image version](/azure/virtual-machines/linux/disk-encryption-overview#supported-operating-systems) can be used instead of an Ubuntu VM. Replace `Canonical:UbuntuServer:20.04-LTS:latest` accordingly.
+ It takes a few minutes to create the VM and supporting resources. The following example output shows the VM create operation was successful.
-```
+```json
{ "fqdns": "", "id": "/subscriptions/<guid>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM",
az vm encryption show --name "myVM" -g "MyResourceGroup"
When encryption is enabled, you will see "EnableEncryption" in the returned output:
-```
+```output
"EncryptionOperation": "EnableEncryption" ```
virtual-machines Disk Encryption Sample Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-sample-scripts.md
Previously updated : 08/06/2019 Last updated : 03/29/2023 - # Azure Disk Encryption sample scripts for Linux VMs **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
This article provides sample scripts for preparing pre-encrypted VHDs and other
``` ### Using the Azure Disk Encryption prerequisites PowerShell script+ If you're already familiar with the prerequisites for Azure Disk Encryption, you can use the [Azure Disk Encryption prerequisites PowerShell script](https://raw.githubusercontent.com/Azure/azure-powershell/master/src/Compute/Compute/Extension/AzureDiskEncryption/Scripts/AzureDiskEncryptionPreRequisiteSetup.ps1). For an example of using this PowerShell script, see the [Encrypt a VM Quickstart](disk-encryption-powershell-quickstart.md). You can remove the comments from a section of the script, starting at line 211, to encrypt all disks for existing VMs in an existing resource group. The following table shows which parameters can be used in the PowerShell script: - |Parameter|Description|Mandatory?| |||| |$resourceGroupName| Name of the resource group to which the KeyVault belongs to. A new resource group with this name will be created if one doesn't exist.| True|
The following table shows which parameters can be used in the PowerShell script:
### Encrypt or decrypt VMs with an Azure AD app (previous release) - [Enable disk encryption on an existing or running Linux VM](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-linux-vm)-- - [Disable encryption on a running Linux VM](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/decrypt-running-linux-vm) - Disabling encryption is only allowed on Data volumes for Linux VMs.-- - [Create a new encrypted managed disk from a pre-encrypted VHD/storage blob](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/create-encrypted-managed-disk) - Creates a new encrypted managed disk provided a pre-encrypted VHD and its corresponding encryption settings
The following table shows which parameters can be used in the PowerShell script:
### Prerequisites for OS disk encryption
-* The VM must be using a distribution compatible with OS disk encryption as listed in the [Azure Disk Encryption supported operating systems](disk-encryption-overview.md#supported-vms)
+* The VM must be using a distribution compatible with OS disk encryption as listed in the [Azure Disk Encryption supported operating systems](/azure/virtual-machines/linux/disk-encryption-overview#supported-operating-systems)
* The VM must be created from the Marketplace image in Azure Resource Manager.
-* Azure VM with at least 4 GB of RAM (recommended size is 7 GB).
+* Azure VM with at least 4 GB of RAM (recommended size is 7 GB). See [Memory requirements](/azure/virtual-machines/linux/disk-encryption-overview#memory-requirements) for further information.
* (For RHEL and CentOS) Disable SELinux. To disable SELinux, see "4.4.2. Disabling SELinux" in the [SELinux User's and Administrator's Guide](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-working_with_selinux-changing_selinux_modes#sect-Security-Enhanced_Linux-Enabling_and_Disabling_SELinux-Disabling_SELinux) on the VM. * After you disable SELinux, reboot the VM at least once. ### Steps
-1. Create a VM by using one of the distributions specified previously.
- For CentOS 7.2, OS disk encryption is supported via a special image. To use this image, specify "7.2n" as the SKU when you create the VM:
-
- ```powershell
- Set-AzVMSourceImage -VM $VirtualMachine -PublisherName "OpenLogic" -Offer "CentOS" -Skus "7.2n" -Version "latest"
- ```
+1. Create a VM by using one of the distributions specified previously.
2. Configure the VM according to your needs. If you're going to encrypt all the (OS + data) drives, the data drives need to be specified and mountable from /etc/fstab. > [!NOTE] > Use UUID=... to specify data drives in /etc/fstab instead of specifying the block device name (for example, /dev/sdb1). During encryption, the order of drives changes on the VM. If your VM relies on a specific order of block devices, it will fail to mount them after encryption.- 3. Sign out of the SSH sessions.- 4. To encrypt the OS, specify volumeType as **All** or **OS** when you enable encryption. > [!NOTE] > All user-space processes that are not running as `systemd` services should be killed with a `SIGKILL`. Reboot the VM. When you enable OS disk encryption on a running VM, plan on VM downtime. 5. Periodically monitor the progress of encryption by using the instructions in the [next section](#monitoring-os-encryption-progress).- 6. After Get-AzVmDiskEncryptionStatus shows "VMRestartPending", restart your VM either by signing in to it or by using the portal, PowerShell, or CLI.
- ```powershell
+
+ ```azurepowershell-interactive
C:\> Get-AzVmDiskEncryptionStatus -ResourceGroupName $ResourceGroupName -VMName $VMName -ExtensionName $ExtensionName
+ ```
+ ```output
    OsVolumeEncrypted : VMRestartPending
    DataVolumesEncrypted : NotMounted
    OsVolumeEncryptionSettings : Microsoft.Azure.Management.Compute.Models.DiskEncryptionSettings
    ProgressMessage : OS disk successfully encrypted, reboot the VM
    ```

Before you reboot, we recommend that you save [boot diagnostics](https://azure.microsoft.com/blog/boot-diagnostics-for-virtual-machines-v2/) of the VM.
- ```powershell
+
+ ```azurepowershell-interactive
+ Get-AzVMDiskEncryptionStatus -ResourceGroupName $_.ResourceGroupName -VMName $_.Name
+ ```
+
+ ```output
    OsVolumeEncrypted : EncryptionInProgress
    DataVolumesEncrypted : NotMounted
    OsVolumeEncryptionSettings : Microsoft.Azure.Management.Compute.Models.DiskEncryptionSettings
    ProgressMessage : OS disk encryption started
    ```

After the VM reaches "OS disk encryption started", it takes about 40 to 50 minutes on a Premium-storage backed VM. Because of [issue #388](https://github.com/Azure/WALinuxAgent/issues/388) in WALinuxAgent, `OsVolumeEncrypted` and `DataVolumesEncrypted` show up as `Unknown` in some distributions. With WALinuxAgent version 2.1.5 and later, this issue is fixed automatically. If you see `Unknown` in the output, you can verify disk-encryption status by using the Azure Resource Explorer.
You can monitor OS encryption progress in three ways:
We recommend that you don't sign-in to the VM while OS encryption is in progress. Copy the logs only when the other two methods have failed. ## Prepare a pre-encrypted Linux VHD
-The preparation for pre-encrypted VHDs can vary depending on the distribution. Examples on preparing Ubuntu 16, openSUSE 13.2, and CentOS 7 are available.
-### Ubuntu 16
+The preparation for pre-encrypted VHDs can vary depending on the distribution. Examples on preparing Ubuntu, openSUSE, and CentOS 7 are available.
+
+# [Ubuntu](#tab/ubuntu)
+ Configure encryption during the distribution installation by doing the following steps: 1. Select **Configure encrypted volumes** when you partition the disks.
Configure encryption during the distribution installation by doing the following
Configure encryption to work with Azure by doing the following steps:
-1. Create a file under /usr/local/sbin/azure_crypt_key.sh, with the content in the following script. Pay attention to the KeyFileName, because it's the passphrase file name used by Azure.
+1. Create a file under `/usr/local/sbin/azure_crypt_key.sh`, with the content in the following script. Pay attention to the KeyFileName, because it's the passphrase file name used by Azure.
```bash #!/bin/sh
Configure encryption to work with Azure by doing the following steps:
``` 2. Change the crypt config in */etc/crypttab*. It should look like this:
- ```
+
+ ```config
xxx_crypt uuid=xxxxxxxxxxxxxxxxxxxxx none luks,discard,keyscript=/usr/local/sbin/azure_crypt_key.sh ``` 4. Add executable permissions to the script:+
+ ```bash
+ sudo chmod +x /usr/local/sbin/azure_crypt_key.sh
```
- chmod +x /usr/local/sbin/azure_crypt_key.sh
- ```
-5. Edit */etc/initramfs-tools/modules* by appending lines:
- ```
+
+5. Edit `/etc/initramfs-tools/modules` by appending lines:
+
+ ```config
vfat ntfs nls_cp437 nls_utf8 nls_iso8859-1 ```+ 6. Run `update-initramfs -u -k all` to update the initramfs to make the `keyscript` take effect. 7. Now you can deprovision the VM.
Configure encryption to work with Azure by doing the following steps:
8. Continue to the next step and upload your VHD into Azure.
-### openSUSE 13.2
+# [openSUSE](#tab/opensuse)
+ To configure encryption during the distribution installation, do the following steps:+ 1. When you partition the disks, select **Encrypt Volume Group**, and then enter a password. This is the password that you'll upload to your key vault. ![openSUSE 13.2 Setup - Encrypt Volume Group](./media/disk-encryption/opensuse-encrypt-fig1.png)
To configure encryption during the distribution installation, do the following s
3. Prepare the VM for uploading to Azure by following the instructions in [Prepare a SLES or openSUSE virtual machine for Azure](./suse-create-upload-vhd.md?toc=/azure/virtual-machines/linux/toc.json#prepare-opensuse-152). Don't run the last step (deprovisioning the VM) yet. To configure encryption to work with Azure, do the following steps:
-1. Edit the /etc/dracut.conf, and add the following line:
- ```
+
+1. Edit the `/etc/dracut.conf`, and add the following line:
+
+ ```config
add_drivers+=" vfat ntfs nls_cp437 nls_iso8859-1" ```
-2. Comment out these lines by the end of the file /usr/lib/dracut/modules.d/90crypt/module-setup.sh:
+
+2. Comment out these lines by the end of the file `/usr/lib/dracut/modules.d/90crypt/module-setup.sh`:
+ ```bash # inst_multiple -o \ # $systemdutildir/system-generators/systemd-cryptsetup-generator \
To configure encryption to work with Azure, do the following steps:
# inst_script "$moddir"/crypt-run-generator.sh /sbin/crypt-run-generator ```
-3. Append the following line at the beginning of the file /usr/lib/dracut/modules.d/90crypt/parse-crypt.sh:
+3. Append the following line at the beginning of the file `/usr/lib/dracut/modules.d/90crypt/parse-crypt.sh`:
+ ```bash DRACUT_SYSTEMD=0 ```+ And change all occurrences of:+ ```bash if [ -z "$DRACUT_SYSTEMD" ]; then ```+ to:+ ```bash if [ 1 ]; then ```
-4. Edit /usr/lib/dracut/modules.d/90crypt/cryptroot-ask.sh and append it to "# Open LUKS device":
+
+4. Edit `/usr/lib/dracut/modules.d/90crypt/cryptroot-ask.sh` and append it to "# Open LUKS device":
```bash MountPoint=/tmp-keydisk-mount
To configure encryption to work with Azure, do the following steps:
fi done ```+ 5. Run `/usr/sbin/dracut -f -v` to update the initrd. 6. Now you can deprovision the VM and upload your VHD into Azure.
-### CentOS 7 and RHEL 7
+# [CentOS 7 and RHEL 7](#tab/rhel)
To configure encryption during the distribution installation, do the following steps:+ 1. Select **Encrypt my data** when you partition disks. ![CentOS 7 Setup -Installation destination](./media/disk-encryption/centos-encrypt-fig1.png)
To configure encryption during the distribution installation, do the following s
To configure encryption to work with Azure, do the following steps: 1. Edit the /etc/dracut.conf, and add the following line:
- ```
+
+ ```config
add_drivers+=" vfat ntfs nls_cp437 nls_iso8859-1" ``` 2. Comment out these lines by the end of the file /usr/lib/dracut/modules.d/90crypt/module-setup.sh:+ ```bash # inst_multiple -o \ # $systemdutildir/system-generators/systemd-cryptsetup-generator \
To configure encryption to work with Azure, do the following steps:
``` 3. Append the following line at the beginning of the file /usr/lib/dracut/modules.d/90crypt/parse-crypt.sh:+ ```bash DRACUT_SYSTEMD=0 ```+ And change all occurrences of:+ ```bash if [ -z "$DRACUT_SYSTEMD" ]; then ```+ to+ ```bash if [ 1 ]; then ```
-4. Edit /usr/lib/dracut/modules.d/90crypt/cryptroot-ask.sh and append the following after the "# Open LUKS device":
+
+4. Edit `/usr/lib/dracut/modules.d/90crypt/cryptroot-ask.sh` and append the following after the "# Open LUKS device":
+ ```bash MountPoint=/tmp-keydisk-mount KeyFileName=LinuxPassPhraseFileName
To configure encryption to work with Azure, do the following steps:
fi done ```
-5. Run the "/usr/sbin/dracut -f -v" to update the initrd.
+
+5. Run the `/usr/sbin/dracut -f -v` to update the initrd.
![CentOS 7 Setup - run /usr/sbin/dracut -f -v](./media/disk-encryption/centos-encrypt-fig5.png) ++ ## Upload encrypted VHD to an Azure storage account+ After DM-Crypt encryption is enabled, the local encrypted VHD needs to be uploaded to your storage account.+ ```powershell Add-AzVhd [-Destination] <Uri> [-LocalFilePath] <FileInfo> [[-NumberOfUploaderThreads] <Int32> ] [[-BaseImageUriToPatch] <Uri> ] [[-OverWrite]] [ <CommonParameters>] ```+ ## Upload the secret for the pre-encrypted VM to your key vault+ When encrypting using an Azure AD app (previous release), the disk-encryption secret that you obtained previously must be uploaded as a secret in your key vault. The key vault needs to have disk encryption and permissions enabled for your Azure AD client.
-```powershell
+```azurepowershell-interactive
$AadClientId = "My-AAD-Client-Id" $AadClientSecret = "My-AAD-Client-Secret"
When encrypting using an Azure AD app (previous release), the disk-encryption se
``` ### Disk encryption secret not encrypted with a KEK+ To set up the secret in your key vault, use [Set-AzKeyVaultSecret](/powershell/module/az.keyvault/set-azkeyvaultsecret). The passphrase is encoded as a base64 string and then uploaded to the key vault. In addition, make sure that the following tags are set when you create the secret in the key vault.
-```powershell
+```azurepowershell-interactive
# This is the passphrase that was provided for encryption during the distribution installation $passphrase = "contoso-password"
To set up the secret in your key vault, use [Set-AzKeyVaultSecret](/powershell/m
$secretUrl = $secret.Id ``` - Use the `$secretUrl` in the next step for [attaching the OS disk without using KEK](#without-using-a-kek). ### Disk encryption secret encrypted with a KEK+ Before you upload the secret to the key vault, you can optionally encrypt it by using a key encryption key. Use the wrap [API](/rest/api/keyvault/keys/wrap-key) to first encrypt the secret using the key encryption key. The output of this wrap operation is a base64 URL encoded string, which you can then upload as a secret by using the [`Set-AzKeyVaultSecret`](/powershell/module/az.keyvault/set-azkeyvaultsecret) cmdlet.
-```powershell
+```azurepowershell-interactive
# This is the passphrase that was provided for encryption during the distribution installation $passphrase = "contoso-password"
Before you upload the secret to the key vault, you can optionally encrypt it by
Use `$KeyEncryptionKey` and `$secretUrl` in the next step for [attaching the OS disk using KEK](#using-a-kek).
-## Specify a secret URL when you attach an OS disk
+## Specify a secret URL when you attach an OS disk
-### Without using a KEK
+### Without using a KEK
While you're attaching the OS disk, you need to pass `$secretUrl`. The URL was generated in the "Disk-encryption secret not encrypted with a KEK" section.
-```powershell
+
+```azurepowershell-interactive
Set-AzVMOSDisk ` -VM $VirtualMachine ` -Name $OSDiskName `
While you're attaching the OS disk, you need to pass `$secretUrl`. The URL was g
-DiskEncryptionKeyVaultId $KeyVault.ResourceId ` -DiskEncryptionKeyUrl $SecretUrl ```+ ### Using a KEK+ When you attach the OS disk, pass `$KeyEncryptionKey` and `$secretUrl`. The URL was generated in the "Disk encryption secret encrypted with a KEK" section.
-```powershell
+
+```azurepowershell-interactive
Set-AzVMOSDisk ` -VM $VirtualMachine ` -Name $OSDiskName `
virtual-machines Disks Enable Host Based Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-enable-host-based-encryption-cli.md
description: Use encryption at host to enable end-to-end encryption on your Azur
Previously updated : 03/28/2023 Last updated : 03/29/2023
You must enable the feature for your subscription before you use the EncryptionA
- Execute the following command to register the feature for your subscription
-```azurecli
+```azurecli-interactive
az feature register --namespace Microsoft.Compute --name EncryptionAtHost ```
-
+ - Check that the registration state is **Registered** (takes a few minutes) using the command below before trying out the feature.
-```azurecli
+```azurecli-interactive
az feature show --namespace Microsoft.Compute --name EncryptionAtHost ``` - ### Create resources > [!NOTE]
Once the feature is enabled, you need to set up a DiskEncryptionSet and either a
## Example scripts
-### Create a VM with encryption at host enabled with customer-managed keys.
+### Create a VM with encryption at host enabled with customer-managed keys
-Create a VM with managed disks using the resource URI of the DiskEncryptionSet created earlier to encrypt cache of OS and data disks with customer-managed keys. The temp disks are encrypted with platform-managed keys.
+Create a VM with managed disks using the resource URI of the DiskEncryptionSet created earlier to encrypt cache of OS and data disks with customer-managed keys. The temp disks are encrypted with platform-managed keys.
-```azurecli
+```azurecli-interactive
rgName=yourRGName vmName=yourVMName location=eastus vmSize=Standard_DS2_v2
-image=UbuntuLTS
+image=LinuxImageURN
diskEncryptionSetName=yourDiskEncryptionSetName diskEncryptionSetId=$(az disk-encryption-set show -n $diskEncryptionSetName -g $rgName --query [id] -o tsv)
az vm create -g $rgName \
--data-disk-encryption-sets $diskEncryptionSetId $diskEncryptionSetId ```
-### Create a VM with encryption at host enabled with platform-managed keys.
+### Create a VM with encryption at host enabled with platform-managed keys
-Create a VM with encryption at host enabled to encrypt cache of OS/data disks and temp disks with platform-managed keys.
+Create a VM with encryption at host enabled to encrypt cache of OS/data disks and temp disks with platform-managed keys.
-```azurecli
+```azurecli-interactive
rgName=yourRGName vmName=yourVMName location=eastus vmSize=Standard_DS2_v2
-image=UbuntuLTS
+image=LinuxImageURN
az vm create -g $rgName \ -n $vmName \
az vm create -g $rgName \
--data-disk-sizes-gb 128 128 \ ```
-### Update a VM to enable encryption at host.
+### Update a VM to enable encryption at host
-```azurecli
+```azurecli-interactive
rgName=yourRGName vmName=yourVMName
az vm update -n $vmName \
### Check the status of encryption at host for a VM
-```azurecli
+```azurecli-interactive
rgName=yourRGName vmName=yourVMName
az vm show -n $vmName \
--query [securityProfile.encryptionAtHost] -o tsv ``` -
-### Update a VM to disable encryption at host.
+### Update a VM to disable encryption at host
You must deallocate your VM before you can disable encryption at host.
-```azurecli
+```azurecli-interactive
rgName=yourRGName vmName=yourVMName
az vm update -n $vmName \
--set securityProfile.encryptionAtHost=false ```
-### Create a Virtual Machine Scale Set with encryption at host enabled with customer-managed keys.
+### Create a Virtual Machine Scale Set with encryption at host enabled with customer-managed keys
Create a Virtual Machine Scale Set with managed disks using the resource URI of the DiskEncryptionSet created earlier to encrypt cache of OS and data disks with customer-managed keys. The temp disks are encrypted with platform-managed keys.
-```azurecli
+```azurecli-interactive
rgName=yourRGName vmssName=yourVMSSName location=westus2 vmSize=Standard_DS3_V2
-image=UbuntuLTS
+image=LinuxImageURN
diskEncryptionSetName=yourDiskEncryptionSetName diskEncryptionSetId=$(az disk-encryption-set show -n $diskEncryptionSetName -g $rgName --query [id] -o tsv)
diskEncryptionSetId=$(az disk-encryption-set show -n $diskEncryptionSetName -g $
az vmss create -g $rgName \ -n $vmssName \ --encryption-at-host \image UbuntuLTS \
+--image $image \
--upgrade-policy automatic \ --admin-username azureuser \ --generate-ssh-keys \
az vmss create -g $rgName \
--data-disk-encryption-sets $diskEncryptionSetId $diskEncryptionSetId ```
-### Create a Virtual Machine Scale Set with encryption at host enabled with platform-managed keys.
+### Create a Virtual Machine Scale Set with encryption at host enabled with platform-managed keys
Create a Virtual Machine Scale Set with encryption at host enabled to encrypt cache of OS/data disks and temp disks with platform-managed keys.
-```azurecli
+```azurecli-interactive
rgName=yourRGName vmssName=yourVMSSName location=westus2 vmSize=Standard_DS3_V2
-image=UbuntuLTS
+image=LinuxImageURN
az vmss create -g $rgName \ -n $vmssName \ --encryption-at-host \image UbuntuLTS \
+--image $image \
--upgrade-policy automatic \ --admin-username azureuser \ --generate-ssh-keys \ --data-disk-sizes-gb 64 128 \ ```
-### Update a Virtual Machine Scale Set to enable encryption at host.
+### Update a Virtual Machine Scale Set to enable encryption at host
-```azurecli
+```azurecli-interactive
rgName=yourRGName vmssName=yourVMName
az vmss update -n $vmssName \
### Check the status of encryption at host for a Virtual Machine Scale Set
-```azurecli
+```azurecli-interactive
rgName=yourRGName vmssName=yourVMName
az vmss show -n $vmssName \
--query [virtualMachineProfile.securityProfile.encryptionAtHost] -o tsv ```
-### Update a Virtual Machine Scale Set to disable encryption at host.
+### Update a Virtual Machine Scale Set to disable encryption at host
You can disable encryption at host on your Virtual Machine Scale Set but, this will only affect VMs created after you disable encryption at host. For existing VMs, you must deallocate the VM, [disable encryption at host on that individual VM](#update-a-vm-to-disable-encryption-at-host), then reallocate the VM.
-```azurecli
+```azurecli-interactive
rgName=yourRGName vmssName=yourVMName
When calling the [Resource Skus API](/rest/api/compute/resourceskus/list), check
For the Azure PowerShell module, use the [Get-AzComputeResourceSku](/powershell/module/az.compute/get-azcomputeresourcesku) cmdlet.
-```powershell
+```azurepowershell-interactive
$vmSizes=Get-AzComputeResourceSku | where{$_.ResourceType -eq 'virtualMachines' -and $_.Locations.Contains('CentralUSEUAP')} foreach($vmSize in $vmSizes)
virtual-machines Run Command Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command-managed.md
The *updated* managed Run Command uses the same VM agent channel to execute scri
- Support for long running (hours/days) scripts - Passing secrets (parameters, passwords) in a secure manner
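As a brief illustration of the managed flow (not taken from the article; the resource names and script are placeholders, and the `Set-AzVMRunCommand`/`Get-AzVMRunCommand` cmdlets from Az.Compute are assumed), creating a named run command on a Linux VM and checking it later looks roughly like this:

```powershell
# Create (or update) a managed run command resource on a Linux VM.
Set-AzVMRunCommand -ResourceGroupName "myResourceGroup" -VMName "myLinuxVM" `
    -RunCommandName "CheckDiskSpace" -Location "eastus" `
    -SourceScript "df -h /"

# Query the execution state and output later; managed run commands can keep running for a long time.
Get-AzVMRunCommand -ResourceGroupName "myResourceGroup" -VMName "myLinuxVM" `
    -RunCommandName "CheckDiskSpace" -Expand InstanceView
```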
+## Prerequisites
+
+### Supported Linux distributions
+| **Linux Distro** | **x64** | **ARM64** |
+|:--|:--:|:--:|
+| Alma Linux | 9.x+ | Not Supported |
+| CentOS | 7.x+, 8.x+ | Not Supported |
+| Debian | 10+ | Not Supported |
+| Flatcar Linux | 3374.2.x+ | Not Supported |
+| openSUSE | 12.3+ | Not Supported |
+| Oracle Linux | 6.4+, 7.x+, 8.x+ | Not Supported |
+| Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+ | Not Supported |
+| Rocky Linux | 9.x+ | Not Supported |
+| SLES | 12.x+, 15.x+ | Not Supported |
+| Ubuntu | 18.04+, 20.04+, 22.04+ | Not Supported |
+ ## Limiting access to Run Command Listing the run commands or showing the details of a command requires the `Microsoft.Compute/locations/runCommands/read` permission on Subscription level. The built-in [Reader](../../role-based-access-control/built-in-roles.md#reader) role and higher levels have this permission.
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command.md
You can access your virtual machines in multiple ways. Run Command can run scrip
This capability is useful in all scenarios where you want to run a script within a virtual machine. It's one of the only ways to troubleshoot and remediate a virtual machine that doesn't have the RDP or SSH port open because of network or administrative user configuration.
+## Prerequisites
+
+### Supported Linux distributions
+| **Linux Distro** | **x64** | **ARM64** |
+|:--|:--:|:--:|
+| Alma Linux | 9.x+ | 9.x+ |
+| CentOS | 7.x+, 8.x+ | 7.x+ |
+| Debian | 10+ | 11.x+ |
+| Flatcar Linux | 3374.2.x+ | 3374.2.x+ |
+| openSUSE | 12.3+ | Not Supported |
+| Oracle Linux | 6.4+, 7.x+, 8.x+ | Not Supported |
+| Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+ | 8.6+, 9.0+ |
+| Rocky Linux | 9.x+ | 9.x+ |
+| SLES | 12.x+, 15.x+ | 15.x SP4+ |
+| Ubuntu | 18.04+, 20.04+, 22.04+ | 20.04+, 22.04+ |
+ ## Restrictions The following restrictions apply when you're using Run Command:
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch.md
With trusted launch and VBS you can enable Windows Defender Credential Guard. Th
Trusted launch is integrated with Microsoft Defender for Cloud to ensure your VMs are properly configured. Microsoft Defender for Cloud will continually assess compatible VMs and issue relevant recommendations. -- **Recommendation to enable Secure Boot** - This Recommendation only applies for VMs that support trusted launch. Mirosoft Defender for Cloud will identify VMs that can enable Secure Boot, but have it disabled. It will issue a low severity recommendation to enable it.
+- **Recommendation to enable Secure Boot** - This Recommendation only applies for VMs that support trusted launch. Microsoft Defender for Cloud will identify VMs that can enable Secure Boot, but have it disabled. It will issue a low severity recommendation to enable it.
- **Recommendation to enable vTPM** - If your VM has vTPM enabled, Microsoft Defender for Cloud can use it to perform Guest Attestation and identify advanced threat patterns. If Microsoft Defender for Cloud identifies VMs that support trusted launch and have vTPM disabled, it will issue a low severity recommendation to enable it. - **Recommendation to install guest attestation extension** - If your VM has secure boot and vTPM enabled but it doesn't have the guest attestation extension installed, Microsoft Defender for Cloud will issue a low severity recommendation to install the guest attestation extension on it. This extension allows Microsoft Defender for Cloud to proactively attest and monitor the boot integrity of your VMs. Boot integrity is attested via remote attestation. - **Attestation health assessment or Boot Integrity Monitoring** - If your VM has Secure Boot and vTPM enabled and attestation extension installed, Microsoft Defender for Cloud can remotely validate that your VM booted in a healthy way. This is known as boot integrity monitoring. Microsoft Defender for Cloud issues an assessment, indicating the status of remote attestation.
virtual-machines Update Image Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/update-image-resources.md
Image version:
- Exclusion from latest - End of life date
-If you plan on adding replica regions, don't delete the source managed image. The source managed image is needed for replicating the image version to additional regions.
### [CLI](#tab/cli2)
az sig image-version list-shared \
## Next steps - Create an [image definition and an image version](image-version.md).-- Create a VM from a [generalized](vm-generalized-image-version.md) or [specialized](vm-specialized-image-version.md) image in an Azure Compute Gallery.
+- Create a VM from a [generalized](vm-generalized-image-version.md) or [specialized](vm-specialized-image-version.md) image in an Azure Compute Gallery.
virtual-machines Run Command Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/run-command-managed.md
The *updated* managed Run Command uses the same VM agent channel to execute scri
- Support for long running (hours/days) scripts - Passing secrets (parameters, passwords) in a secure manner
+## Prerequisites
+
+### Supported Windows OS versions
+| **Windows OS** | **x64** |
+|:-|:-:|
+| Windows 10 | Supported |
+| Windows 11 | Supported |
+| Windows Server 2008 SP2 | Supported |
+| Windows Server 2008 R2 | Supported |
+| Windows Server 2012 | Supported |
+| Windows Server 2012 R2 | Supported |
+| Windows Server 2016 | Supported |
+| Windows Server 2016 Core | Supported |
+| Windows Server 2019 | Supported |
+| Windows Server 2019 Core | Supported |
+| Windows Server 2022 | Supported |
+| Windows Server 2022 Core | Supported |
+ ## Limiting access to Run Command Listing the run commands or showing the details of a command requires the `Microsoft.Compute/locations/runCommands/read` permission on Subscription Level. The built-in [Reader](../../role-based-access-control/built-in-roles.md#reader) role and higher levels have this permission.
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/run-command.md
ms.devlang: azurecli
The Run Command feature uses the virtual machine (VM) agent to run PowerShell scripts within an Azure Windows VM. You can use these scripts for general machine or application management. They can help you to quickly diagnose and remediate VM access and network issues and get the VM back to a good state.
-
## Benefits

You can access your virtual machines in multiple ways. Run Command can run scripts on your virtual machines remotely by using the VM agent. You use Run Command through the Azure portal, [REST API](/rest/api/compute/virtual-machines/run-command), or [PowerShell](/powershell/module/az.compute/invoke-azvmruncommand) for Windows VMs. This capability is useful in all scenarios where you want to run a script within a virtual machine. It's one of the only ways to troubleshoot and remediate a virtual machine that doesn't have the RDP or SSH port open because of improper network or administrative user configuration.
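For example, a minimal Azure CLI sketch that invokes the action-based Run Command, using placeholder resource names:

```bash
# Hypothetical names: run a quick PowerShell diagnostic inside the VM over the agent channel
az vm run-command invoke \
  --resource-group myResourceGroup \
  --name myVM \
  --command-id RunPowerShellScript \
  --scripts "ipconfig /all"
```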
+## Prerequisites
+
+### **Supported Windows OS**
+| **Windows OS** | **x64** |
+|:-|:-:|
+| Windows 10 | Supported |
+| Windows 11 | Supported |
+| Windows Server 2008 SP2 | Supported |
+| Windows Server 2008 R2 | Supported |
+| Windows Server 2012 | Supported |
+| Windows Server 2012 R2 | Supported |
+| Windows Server 2016 | Supported |
+| Windows Server 2016 Core | Supported |
+| Windows Server 2019 | Supported |
+| Windows Server 2019 Core | Supported |
+| Windows Server 2022 | Supported |
+| Windows Server 2022 Core | Supported |
+
## Restrictions

The following restrictions apply when you're using Run Command:
virtual-machines Redhat Rhui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-rhui.md
Information on Red Hat support policies for all versions of RHEL can be found on
As of April 2019, Azure offers RHEL images that are connected to Extended Update Support (EUS) repositories by default and RHEL images that come connected to the regular (non-EUS) repositories by default. More details on RHEL EUS are available in Red Hat's [version lifecycle documentation](https://access.redhat.com/support/policy/updates/errata) and [EUS documentation](https://access.redhat.com/articles/rhel-eus). The default behavior of `sudo yum update` will vary depending on which RHEL image you provisioned from, as different images are connected to different repositories.
-For a full image list, run `az vm image list --publisher redhat --all` using the Azure CLI.
+For a full image list, run `az vm image list --offer RHEL --all -p RedHat --output table` using the Azure CLI.
### Images connected to non-EUS repositories
If you provision a VM from a RHEL image that is connected to non-EUS repositorie
Images that are connected to non-EUS repositories will not contain a minor version number in the SKU. The SKU is the third element in the URN (full name of the image). For example, all of the following images come attached to non-EUS repositories:
-```text
-RedHat:RHEL:7-LVM:7.4.2018010506
-RedHat:RHEL:7-LVM:7.5.2018081518
-RedHat:RHEL:7-LVM:7.6.2019062414
-RedHat:RHEL:7-RAW:7.4.2018010506
-RedHat:RHEL:7-RAW:7.5.2018081518
-RedHat:RHEL:7-RAW:7.6.2019062120
+```output
+RedHat:RHEL:7-LVM:7.9.2023032012
+RedHat:RHEL:8-LVM:8.7.2023022813
+RedHat:RHEL:9-lvm:9.1.2022112101
+RedHat:rhel-raw:7-raw:7.9.2022040605
+RedHat:rhel-raw:8-raw:8.6.2022052413
+RedHat:rhel-raw:9-raw:9.1.2022112101
```
-Note that the SKUs are either 7-LVM or 7-RAW. The minor version is indicated in the version (fourth element in the URN) of these images.
+Note that the SKUs are either X-LVM or X-RAW. The minor version is indicated in the version (fourth element in the URN) of these images.
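As an illustration of how a URN is consumed, the following sketch creates a VM from one of the images listed above; the resource group, VM name, and credentials are placeholders:

```bash
# URN format is Publisher:Offer:SKU:Version; the minor version lives in the fourth element
az vm create \
  --resource-group myResourceGroup \
  --name myRhelVM \
  --image RedHat:RHEL:8-LVM:8.7.2023022813 \
  --admin-username azureuser \
  --generate-ssh-keys
```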
### Images connected to EUS repositories
If you provision a VM from a RHEL image that is connected to EUS repositories, y
Images connected to EUS repositories will contain a minor version number in the SKU. For example, all of the following images come attached to EUS repositories:
-```text
-RedHat:RHEL:7.4:7.4.2019062107
-RedHat:RHEL:7.5:7.5.2019062018
-RedHat:RHEL:7.6:7.6.2019062116
+```output
+RedHat:RHEL:7_9:7.9.20230301107
+RedHat:RHEL:8_7:8.7.2023022801
+RedHat:RHEL:9_1:9.1.2022112113
```

## RHEL EUS and version-locking RHEL VMs
At the time of this writing, EUS support has ended for RHEL <= 7.4. See the "Red
* RHEL 7.5 EUS support ends April 30, 2020
* RHEL 7.6 EUS support ends May 31, 2021
* RHEL 7.7 EUS support ends August 30, 2021
+* RHEL 8.4 EUS support ends May 31, 2023
+* RHEL 8.6 EUS support ends May 31, 2024
+* RHEL 9.0 EUS support ends May 31, 2024
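Before version-locking, you can confirm which minor release a VM is currently running; a minimal sketch:

```bash
# Print the installed RHEL release, for example "Red Hat Enterprise Linux release 8.6 (Ootpa)"
cat /etc/redhat-release
```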
### Switch a RHEL VM 7.x to EUS (version-lock to a specific minor version)

Use the following instructions to lock a RHEL 7.x VM to a particular minor release (run as root):
Use the following instructions to lock a RHEL 7.x VM to a particular minor relea
> This only applies for RHEL 7.x versions for which EUS is available. At the time of this writing, this includes RHEL 7.2-7.7. More details are available at the [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata) page.

1. Disable non-EUS repos:

   ```bash
- yum --disablerepo='*' remove 'rhui-azure-rhel7'
+ sudo yum --disablerepo='*' remove 'rhui-azure-rhel7'
   ```

1. Add EUS repos:

   ```bash
- yum --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel7-eus.config' install 'rhui-azure-rhel7-eus'
+ sudo yum --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel7-eus.config' install 'rhui-azure-rhel7-eus'
   ```

1. Lock the `releasever` variable (run as root):

   ```bash
- echo $(. /etc/os-release && echo $VERSION_ID) > /etc/yum/vars/releasever
+ echo $(. /etc/os-release && echo $VERSION_ID) | sudo tee /etc/yum/vars/releasever
   ```

>[!NOTE]
Use the following instructions to lock a RHEL 8.x VM to a particular minor relea
> This only applies for RHEL 8.x versions for which EUS is available. At the time of this writing, this includes RHEL 8.1-8.2. More details are available at the [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata) page.

1. Disable non-EUS repos:

   ```bash
- yum --disablerepo='*' remove 'rhui-azure-rhel8'
+ sudo yum --disablerepo='*' remove 'rhui-azure-rhel8'
   ```

1. Get the EUS repos config file:

   ```bash
- wget https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel8-eus.config
+ sudo wget https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel8-eus.config
   ```

1. Add EUS repos:

   ```bash
- yum --config=rhui-microsoft-azure-rhel8-eus.config install rhui-azure-rhel8-eus
+ sudo yum --config=rhui-microsoft-azure-rhel8-eus.config install rhui-azure-rhel8-eus
   ```

1. Lock the `releasever` variable (run as root):

   ```bash
- echo $(. /etc/os-release && echo $VERSION_ID) > /etc/yum/vars/releasever
+ echo $(. /etc/os-release && echo $VERSION_ID) | sudo tee /etc/yum/vars/releasever
   ```

>[!NOTE]
> The above instruction will lock the RHEL minor release to the current minor release. Enter a specific minor release if you are looking to upgrade and lock to a later minor release that is not the latest. For example, `echo 8.1 > /etc/yum/vars/releasever` will lock your RHEL version to RHEL 8.1.

>[!NOTE]
- > If there are permission issues to access the releasever, you can edit the file using 'nano /etc/yum/vars/releaseve' and add the image version details and save ('Ctrl+o' then press enter and then 'Ctrl+x').
+ > If there are permission issues to access the releasever, you can edit the file using your favorite editor and add the image version details and save it.
1. Update your RHEL VM

   ```bash
   sudo yum update
Use the following instructions to lock a RHEL 8.x VM to a particular minor relea
Run the following as root:

1. Remove the `releasever` file:

   ```bash
- rm /etc/yum/vars/releasever
+ sudo rm /etc/yum/vars/releasever
   ```

1. Disable EUS repos:

   ```bash
- yum --disablerepo='*' remove 'rhui-azure-rhel7-eus'
+ sudo yum --disablerepo='*' remove 'rhui-azure-rhel7-eus'
   ```

1. Configure RHEL VM

   ```bash
- yum --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel7.config' install 'rhui-azure-rhel7'
+ sudo yum --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel7.config' install 'rhui-azure-rhel7'
   ```

1. Update your RHEL VM
Run the following as root:
Run the following as root:

1. Remove the `releasever` file:

   ```bash
- rm /etc/yum/vars/releasever
+ sudo rm /etc/yum/vars/releasever
   ```

1. Disable EUS repos:

   ```bash
- yum --disablerepo='*' remove 'rhui-azure-rhel8-eus'
+ sudo yum --disablerepo='*' remove 'rhui-azure-rhel8-eus'
   ```

1. Get the regular repos config file:

   ```bash
- wget https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel8.config
+ sudo wget https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel8.config
   ```

1. Add non-EUS repos:

   ```bash
- yum --config=rhui-microsoft-azure-rhel8.config install rhui-azure-rhel8
+ sudo yum --config=rhui-microsoft-azure-rhel8.config install rhui-azure-rhel8
   ```

1. Update your RHEL VM
RHUI is available in all regions where RHEL on-demand images are available. It c
If you're using a network configuration to further restrict access from RHEL PAYG VMs, make sure the following IPs are allowed for `yum update` to work depending on the environment you're in:
-```
+```output
# Azure Global
RHUI 3
13.91.47.76
eastus - 52.142.4.99
australiaeast - 20.248.180.252
southeastasia - 20.24.186.80
-# Azure US Government (To be deprecated after 10th April 2023. For RHUI 4 conections, use public RHUI IPs as provided above)
+# Azure US Government (To be deprecated after 10th April 2023. For RHUI 4 connections, use public RHUI IPs as provided above)
13.72.186.193
13.72.14.155
52.244.249.194
southeastasia - 20.24.186.80
### Update expired RHUI client certificate on a VM
-If you experience RHUI certificate issues from your Azure RHEL PAYG VM, reference the [troubleshooting guidance for RHUI certificate issues in Azure](/troubleshoot/azure/virtual-machines/troubleshoot-linux-rhui-certificate-issues).
+If you experience RHUI certificate issues from your Azure RHEL PAYG VM, reference the [Troubleshooting guidance for RHUI certificate issues in Azure](/troubleshoot/azure/virtual-machines/troubleshoot-linux-rhui-certificate-issues).
This procedure is provided for reference only. RHEL PAYG images already have the
- For RHEL 6:

  ```bash
- yum --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel6.config' install 'rhui-azure-rhel6'
+ sudo yum --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel6.config' install 'rhui-azure-rhel6'
  ```

- For RHEL 7:

  ```bash
- yum --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel7.config' install 'rhui-azure-rhel7'
+ sudo yum --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel7.config' install 'rhui-azure-rhel7'
  ```

- For RHEL 8:
This procedure is provided for reference only. RHEL PAYG images already have the
   ```

1. Save the file and run the following command:

   ```bash
- dnf --config rhel8.config install 'rhui-azure-rhel8'
+ sudo dnf --config rhel8.config install 'rhui-azure-rhel8'
   ```

1. Update your VM

   ```bash
virtual-network Virtual Networks Viewing And Modifying Hostnames https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-viewing-and-modifying-hostnames.md
Title: Viewing and Modifying Hostnames
-description: How to view and change hostnames for Azure virtual machines, web and worker roles for name resolution
+ Title: View and Modify hostnames
+description: Learn how to view and modify hostnames for your Azure virtual machines by using the Azure portal or a remote connection.
ms.assetid: c668cd8e-4e43-4d05-acc3-db64fa78d828
Previously updated : 05/14/2021 Last updated : 03/29/2023
-# Viewing and modifying hostnames
-To allow your role instances to be referenced by host name, you must set the value for the host name in the service configuration file for each role. You do that by adding the desired host name to the **vmName** attribute of the **Role** element. The value of the **vmName** attribute is used as a base for the host name of each role instance. For example, if **vmName** is *webrole* and there are three instances of that role, the host names of the instances will be *webrole0*, *webrole1*, and *webrole2*. You do not need to specify a host name for virtual machines in the configuration file, because the host name for a virtual machine is populated based on the virtual machine name. For more information about configuring a Microsoft Azure service, see [Azure Service Configuration Schema (.cscfg File)](/previous-versions/azure/reference/ee758710(v=azure.100))
+# View and modify hostnames
-## Viewing hostnames
-You can view the host names of virtual machines and role instances in a cloud service by using any of the tools below.
+The hostname identifies your virtual machine (VM) in the user interface and Azure operations. You first assign the hostname of a VM in the **Virtual machine name** field during the creation process in the Azure portal. After you create a VM, you can view and modify the hostname either through a remote connection or in the Azure portal.
-### Service configuration file
-You can download the service configuration file for a deployed service from the **Configure** blade of the service in the Azure portal. You can then look for the **vmName** attribute for the **Role name** element to see the host name. Keep in mind that this host name is used as a base for the host name of each role instance. For example, if **vmName** is *webrole* and there are three instances of that role, the host names of the instances will be *webrole0*, *webrole1*, and *webrole2*.
+## View hostnames
+You can view the hostname of your VM in a cloud service by using any of the following tools.
+
+### Azure portal
+
+In the Azure portal, go to your VM, and select **Properties** from the left navigation. On the **Properties** page, you can view the hostname under **Computer Name**.
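You can read the same value from the Azure CLI; a minimal sketch with placeholder names:

```bash
# Hypothetical names: print the computer name stored in the VM's OS profile
az vm show \
  --resource-group myResourceGroup \
  --name myVM \
  --query "osProfile.computerName" \
  --output tsv
```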
+ ### Remote Desktop
-After you enable Remote Desktop (Windows), Windows PowerShell remoting (Windows), or SSH (Linux and Windows) connections to your virtual machines or role instances, you can view the host name from an active Remote Desktop connection in various ways:
+You can connect to your VM using a remote desktop tool like Remote Desktop (Windows), Windows PowerShell remoting (Windows), SSH (Linux and Windows) or Bastion (Azure portal). You can then view the hostname in a few ways:
-* Type hostname at the command prompt or SSH terminal.
-* Type ipconfig /all at the command prompt (Windows only).
+* Type *hostname* in PowerShell, the command prompt, or SSH terminal.
+* Type *ipconfig /all* in the command prompt (Windows only).
* View the computer name in the system settings (Windows only).
-### Azure Service Management REST API
+### Azure API
From a REST client, follow these instructions:
-1. Ensure that you have a client certificate to connect to the Azure portal. To obtain a client certificate, follow the steps presented in [How to: Download and Import Publish Settings and Subscription Information](/previous-versions/dynamicsnav-2013/dn385850(v=nav.70)).
-2. Set a header entry named x-ms-version with a value of 2013-11-01.
-3. Send a request in the following format: `https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>?embed-detail=true`
-4. Look for the **HostName** element for each **RoleInstance** element.
+1. Ensure that you have an authenticated connection to the Azure portal. Follow the steps presented in [Create an Azure Active Directory application and service principal that can access resources](/azure/active-directory/develop/howto-create-service-principal-portal).
+2. Send a request in the following format:
+
+ ```http
+ GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}?api-version=2022-11-01
+ ```
+
+ For more information on GET requests for virtual machines, see [Virtual Machines - Get](/rest/api/compute/virtual-machines/get).
+3. Look for the **osProfile** and then the **computerName** element to find the host name.
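If you use the Azure CLI, `az rest` can send the same GET request without manual token handling; this sketch assumes you're signed in and that the placeholders are replaced with your own values:

```bash
# Same request through the Azure CLI's generic REST client; {placeholders} must be replaced
az rest --method get \
  --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}?api-version=2022-11-01" \
  --query "properties.osProfile.computerName" \
  --output tsv
```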
> [!WARNING]
-> You can also view the internal domain suffix for your cloud service from the REST call response by checking the **InternalDnsSuffix** element, or by running ipconfig /all from a command prompt in a Remote Desktop session (Windows), or by running cat /etc/resolv.conf from an SSH terminal (Linux).
+> You can also view the internal domain suffix for your cloud service by running `ipconfig /all` from a command prompt in a remote desktop session (Windows), or by running `cat /etc/resolv.conf` from an SSH terminal (Linux).
> >
-## Modifying a hostname
-You can modify the host name for any virtual machine or role instance by uploading a modified service configuration file, or by renaming the computer from a Remote Desktop session.
+## Modify a hostname
+You can modify the hostname for any VM by renaming the computer from a remote desktop session or by using **Run command** in the Azure portal.
-## Next steps
-[Name Resolution (DNS)](virtual-networks-name-resolution-for-vms-and-role-instances.md)
+From a remote session:
+* For Windows, you can change the hostname from PowerShell by using the [Rename-Computer](/powershell/module/microsoft.powershell.management/rename-computer) command.
+* For Linux, you can change the hostname by using `hostnamectl`.
+
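For example, here is a minimal sketch of renaming a Linux VM from an SSH session, assuming a systemd-based image; the new name is a placeholder:

```bash
# Set a new static hostname; a restart is still recommended so all services pick it up
sudo hostnamectl set-hostname mynewhostname
```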
+You can also run these commands to change the hostname for your VM from the Azure portal by using **Run command**. In the Azure portal, go to your VM, and select **Run command** from the left navigation. From the **Run command** page in the Azure portal:
+* For Windows, select **RunPowerShellScript** and use `Rename-Computer` in the **Run Command Script** pane.
+* For Linux, select **RunShellScript** and use `hostnamectl` in the **Run Command Script** pane.
-[Azure Service Configuration Schema (.cscfg)](/previous-versions/azure/reference/ee758710(v=azure.100))
+The following image shows the **Run command** page in the Azure portal for a Windows VM.
-[Azure Virtual Network Configuration Schema](/previous-versions/azure/reference/jj157100(v=azure.100))
-[Specify DNS settings using network configuration files](/previous-versions/azure/virtual-network/virtual-networks-specifying-a-dns-settings-in-a-virtual-network-configuration-file)
+After you run either `Rename-Computer` or `hostnamectl` on your VM, you need to restart your VM for the hostname to change.
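The same rename-and-restart flow can also be scripted with the Azure CLI instead of the portal's **Run command** page; this is a sketch with placeholder resource names:

```bash
# Hypothetical names: rename a Windows VM through Run Command, then restart it
az vm run-command invoke \
  --resource-group myResourceGroup \
  --name myWindowsVM \
  --command-id RunPowerShellScript \
  --scripts "Rename-Computer -NewName myNewName -Force"

az vm restart --resource-group myResourceGroup --name myWindowsVM
```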
+
+## Azure classic deployment model
+
+The Azure classic deployment model uses a configuration file that you can download and upload to change the host name. To allow your host name to reference your role instances, you must set the value for the host name in the service configuration file for each role. You do that by adding the desired host name to the **vmName** attribute of the **Role** element. The value of the **vmName** attribute is used as a base for the host name of each role instance.
+
+For example, if **vmName** is *webrole* and there are three instances of that role, the host names of the instances are *webrole0*, *webrole1*, and *webrole2*. You don't need to specify a host name for virtual machines in the configuration file, because the host name for a VM is populated based on the virtual machine name. For more information about configuring a Microsoft Azure service, see [Azure Service Configuration Schema (.cscfg File)](/previous-versions/azure/reference/ee758710(v=azure.100))
+
+### Service configuration file
+In the Azure classic deployment model, you can download the service configuration file for a deployed service from the **Configure** pane of the service in the Azure portal. You can then look for the **vmName** attribute for the **Role name** element to see the host name. Keep in mind that this host name is used as a base for the host name of each role instance. For example, if **vmName** is *webrole* and there are three instances of that role, the host names of the instances are *webrole0*, *webrole1*, and *webrole2*. For more information, see [Azure Virtual Network Configuration Schema](/previous-versions/azure/reference/jj157100(v=azure.100))
++
+## Next steps
+* [Name Resolution (DNS)](virtual-networks-name-resolution-for-vms-and-role-instances.md)
+* [Specify DNS settings using network configuration files](/previous-versions/azure/virtual-network/virtual-networks-specifying-a-dns-settings-in-a-virtual-network-configuration-file)
vpn-gateway Nat Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/nat-howto.md
Title: 'Configure NAT on Azure VPN Gateway'
+ Title: 'Configure NAT on VPN Gateway'
-description: Learn how to configure NAT on Azure VPN Gateway.
+description: Learn how to configure NAT for Azure VPN Gateway.
Previously updated : 05/11/2022 Last updated : 03/30/2023
-# How to configure NAT on Azure VPN Gateways
+# How to configure NAT for Azure VPN Gateway
-This article helps you configure NAT (Network Address Translation) on Azure VPN Gateway using the Azure portal.
+This article helps you configure NAT (Network Address Translation) for Azure VPN Gateway using the Azure portal.
## <a name="about"></a>About NAT

NAT defines the mechanisms to translate one IP address to another in an IP packet. It's commonly used to connect networks with overlapping IP address ranges. NAT rules or policies on the gateway devices connecting the networks specify the address mappings for the address translation on the networks.
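For reference, recent Azure CLI versions expose a `nat-rule` subgroup on virtual network gateways; the following is an assumption-laden sketch (placeholder names, and the parameter set may differ by CLI version) of defining a static EgressSNAT rule like the one used later in this article:

```bash
# Hypothetical names: add a static EgressSNAT rule mapping 10.0.1.0/24 to 100.0.1.0/24
az network vnet-gateway nat-rule add \
  --resource-group myResourceGroup \
  --gateway-name myVpnGateway \
  --name VNet \
  --type Static \
  --mode EgressSnat \
  --internal-mappings 10.0.1.0/24 \
  --external-mappings 100.0.1.0/24
```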
-For more information about NAT support on Azure VPN gateway, see [About NAT on Azure VPN Gateways](nat-overview.md).
+For more information about NAT support for Azure VPN Gateway, see [About NAT and Azure VPN Gateway](nat-overview.md).
> [!IMPORTANT]
> * NAT is supported on the following SKUs: VpnGw2~5, VpnGw2AZ~5AZ.
Verify that you have an Azure subscription. If you don't already have an Azure s
## <a name ="vnet"></a>Part 1: Create VNet and gateways
-In this section, you create a virtual network, VPN gateway, and the local network gateway resources to correspond to the resources shown in [Diagram 1](#diagram).
+In this section, you create a virtual network, a VPN gateway, and the local network gateway resources to correspond to the resources shown in [Diagram 1](#diagram).
To create these resources, use the steps in the [Site-to-Site Tutorial](tutorial-site-to-site-portal.md) article. Complete the following sections of the article, but don't create any connections.
Before you create connections, you must create and save NAT rules on the VPN gat
| Name | Type | Mode | Internal | External | Connection |
| --- | --- | --- | --- | --- | --- |
-| VNet | Static | EgressSNAT | 10.0.1.0/24 | 100.0.1.0/24 | Both connections |
+| VNet | Static | EgressSNAT | 10.0.1.0/24 | 100.0.1.0/24 | Both connections |
| Branch_1 | Static | IngressSNAT | 10.0.1.0/24 | 100.0.2.0/24 | Branch 1 connection |
| Branch_2 | Static | IngressSNAT | 10.0.1.0/24 | 100.0.3.0/24 | Branch 2 connection |

Use the following steps to create all the NAT rules on the VPN gateway.

1. In the Azure portal, navigate to the **Virtual Network Gateway** resource page and select **NAT Rules**.
-1. Using the **NAT rules table** above, fill in the values.
+1. Using the **NAT rules table**, fill in the values.
- :::image type="content" source="./media/nat-howto/nat-rules.png" alt-text="Screenshot showing NAT rules." lightbox="./media/nat-howto/nat-rules.png":::
+ :::image type="content" source="./media/nat-howto/disable-bgp.png" alt-text="Screenshot showing NAT rules." lightbox="./media/nat-howto/disable-bgp.png":::
1. Click **Save** to save the NAT rules to the VPN gateway resource. This operation can take up to 10 minutes to complete.

## <a name ="connections"></a>Part 3: Create connections and link NAT rules
In this section, you create the connections, and then associate the NAT rules wi
### 1. Create connections
-Follow the steps in [Create a site-to-site connection](tutorial-site-to-site-portal.md) article to create the two connections as shown below:
+Follow the steps in the [Create a site-to-site connection](tutorial-site-to-site-portal.md) article to create the two connections as shown in the following screenshot:
:::image type="content" source="./media/nat-howto/connections.png" alt-text="Screenshot showing the Connections page." lightbox="./media/nat-howto/connections.png":::
In this step, you associate the NAT rules with each connection resource.
1. Repeat the steps to apply the NAT rules for other connection resources.
-1. If BGP is used, select **Enable BGP Route Translation** in the NAT rules page and click **Save**. Note that the table now shows the connections linked with each NAT rule.
+1. If BGP is used, select **Enable BGP Route Translation** in the NAT rules page and click **Save**. Notice that the table now shows the connections linked with each NAT rule.
- :::image type="content" source="./media/nat-howto/nat-rules-linked.png" alt-text="Screenshot showing Enable BGP." lightbox="./media/nat-howto/nat-rules-linked.png":::
+ :::image type="content" source="./media/nat-howto/enable-bgp.png" alt-text="Screenshot showing Enable BGP." lightbox="./media/nat-howto/enable-bgp.png":::
After completing these steps, you'll have a setup that matches the topology shown in [Diagram 1](#diagram).