Updates from: 03/31/2023 01:19:02
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c User Profile Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-profile-attributes.md
Your Azure Active Directory B2C (Azure AD B2C) directory user profile comes with
Most of the attributes that can be used with Azure AD B2C user profiles are also supported by Microsoft Graph. This article describes supported Azure AD B2C user profile attributes. It also notes those attributes that are not supported by Microsoft Graph, as well as Microsoft Graph attributes that should not be used with Azure AD B2C.

> [!IMPORTANT]
-> You should'nt use built-in or extension attributes to store sensitive personal data, such as account credentials, government identification numbers, cardholder data, financial account data, healthcare information, or sensitive background information.
+> You shouldn't use built-in or extension attributes to store sensitive personal data, such as account credentials, government identification numbers, cardholder data, financial account data, healthcare information, or sensitive background information.
You can also integrate with external systems. For example, you can use Azure AD B2C for authentication, but delegate to an external customer relationship management (CRM) or customer loyalty database as the authoritative source of customer data. For more information, see the [remote profile](https://github.com/azure-ad-b2c/samples/tree/master/policies/remote-profile) solution.
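As a quick illustration (not part of the article), most built-in profile attributes can be read through Microsoft Graph. A minimal sketch, assuming you already hold an access token with `User.Read.All` permission in `$TOKEN` and know the user's object ID; the attribute names are standard Graph user properties:

```bash
# Hedged sketch: read a few built-in profile attributes of a B2C user via Microsoft Graph.
# $TOKEN and the user object ID are assumed; replace the placeholder GUID with a real one.
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://graph.microsoft.com/v1.0/users/00000000-0000-0000-0000-000000000000?\$select=displayName,city,postalCode"
```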
active-directory-domain-services Join Centos Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-centos-linux-vm.md
Once the VM is deployed, follow the steps to connect to the VM using SSH.
To make sure that the VM host name is correctly configured for the managed domain, edit the */etc/hosts* file and set the hostname:
-```console
+```bash
sudo vi /etc/hosts ```
In the *hosts* file, update the *localhost* address. In the following example:
Update these names with your own values:
-```console
+```config
127.0.0.1 centos.aaddscontoso.com centos ```
When done, save and exit the *hosts* file using the `:wq` command of the editor.
The VM needs some additional packages to join the VM to the managed domain. To install and configure these packages, update and install the domain-join tools using `yum`:
-```console
+```bash
sudo yum install adcli realmd sssd krb5-workstation krb5-libs oddjob oddjob-mkhomedir samba-common-tools ```
Now that the required packages are installed on the VM, join the VM to the manag
1. Use the `realm discover` command to discover the managed domain. The following example discovers the realm *AADDSCONTOSO.COM*. Specify your own managed domain name in ALL UPPERCASE:
- ```console
+ ```bash
sudo realm discover AADDSCONTOSO.COM ```
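The digest omits the command's output; roughly, a successful discovery prints something like the following (values illustrative, not from the article):

```output
aaddscontoso.com
  type: kerberos
  realm-name: AADDSCONTOSO.COM
  domain-name: aaddscontoso.com
  configured: no
```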
Now that the required packages are installed on the VM, join the VM to the manag
Again, the managed domain name must be entered in ALL UPPERCASE. In the following example, the account named `contosoadmin@aaddscontoso.com` is used to initialize Kerberos. Enter your own user account that's a part of the managed domain:
- ```console
- kinit contosoadmin@AADDSCONTOSO.COM
+ ```bash
+ sudo kinit contosoadmin@AADDSCONTOSO.COM
``` 1. Finally, join the VM to the managed domain using the `realm join` command. Use the same user account that's a part of the managed domain that you specified in the previous `kinit` command, such as `contosoadmin@AADDSCONTOSO.COM`:
- ```console
+ ```bash
sudo realm join --verbose AADDSCONTOSO.COM -U 'contosoadmin@AADDSCONTOSO.COM' --membership-software=adcli ```
By default, users can only sign in to a VM using SSH public key-based authentica
1. Open the *sshd_conf* file with an editor:
- ```console
+ ```bash
sudo vi /etc/ssh/sshd_config ``` 1. Update the line for *PasswordAuthentication* to *yes*:
- ```console
+ ```bash
PasswordAuthentication yes ```
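Not part of the original steps, but a common sanity check before restarting the daemon in the next step is `sshd`'s test mode, which validates the edited configuration:

```bash
# Hedged addition: validate sshd_config syntax before restarting the SSH service.
sudo sshd -t
```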
By default, users can only sign in to a VM using SSH public key-based authentica
1. To apply the changes and let users sign in using a password, restart the SSH service:
- ```console
+ ```bash
sudo systemctl restart sshd ```
To grant members of the *AAD DC Administrators* group administrative privileges
1. Open the *sudoers* file for editing:
- ```console
+ ```bash
sudo visudo ``` 1. Add the following entry to the end of */etc/sudoers* file. The *AAD DC Administrators* group contains whitespace in the name, so include the backslash escape character in the group name. Add your own domain name, such as *aaddscontoso.com*:
- ```console
+ ```config
# Add 'AAD DC Administrators' group members as admins. %AAD\ DC\ Administrators@aaddscontoso.com ALL=(ALL) NOPASSWD:ALL ```
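As an aside (not in the article), `visudo` can also verify the file after the edit, before you rely on the new entry:

```bash
# Hedged addition: parse-check /etc/sudoers and any included files.
sudo visudo -c
```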
To verify that the VM has been successfully joined to the managed domain, start
1. Create a new SSH connection from your console. Use a domain account that belongs to the managed domain using the `ssh -l` command, such as `contosoadmin@aaddscontoso.com` and then enter the address of your VM, such as *centos.aaddscontoso.com*. If you use the Azure Cloud Shell, use the public IP address of the VM rather than the internal DNS name.
- ```console
- ssh -l contosoadmin@AADDSCONTOSO.com centos.aaddscontoso.com
+ ```bash
+ sudo ssh -l contosoadmin@AADDSCONTOSO.com centos.aaddscontoso.com
``` 1. When you've successfully connected to the VM, verify that the home directory was initialized correctly:
- ```console
- pwd
+ ```bash
+ sudo pwd
``` You should be in the */home* directory with your own directory that matches the user account. 1. Now check that the group memberships are being resolved correctly:
- ```console
- id
+ ```bash
+ sudo id
``` You should see your group memberships from the managed domain. 1. If you signed in to the VM as a member of the *AAD DC Administrators* group, check that you can correctly use the `sudo` command:
- ```console
+ ```bash
sudo yum update ```
active-directory-domain-services Join Rhel Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-rhel-linux-vm.md
Once the VM is deployed, follow the steps to connect to the VM using SSH.
To make sure that the VM host name is correctly configured for the managed domain, edit the */etc/hosts* file and set the hostname:
-```console
+```bash
sudo vi /etc/hosts ```
In the *hosts* file, update the *localhost* address. In the following example:
Update these names with your own values:
-```console
+```config
127.0.0.1 rhel rhel.aaddscontoso.com ``` When done, save and exit the *hosts* file using the `:wq` command of the editor.
-## Install required packages
-The VM needs some additional packages to join the VM to the managed domain. To install and configure these packages, update and install the domain-join tools using `yum`. There are some differences between RHEL 7.x and RHEL 6.x, so use the appropriate commands for your distro version in the remaining sections of this article.
+# [RHEL 6](#tab/rhel)
-**RHEL 7**
-```console
-sudo yum install realmd sssd krb5-workstation krb5-libs oddjob oddjob-mkhomedir samba-common-tools
-```
+> [!IMPORTANT]
+> Keep in mind that Red Hat Enterprise Linux 6.x and Oracle Linux 6.x are already EOL.
+> RHEL 6.10 has [ELS support](https://www.redhat.com/en/resources/els-datasheet) available, which [ends in June 2024](https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204).
-**RHEL 6**
+## Install required packages
-```console
+The VM needs some additional packages to join the VM to the managed domain. To install and configure these packages, update and install the domain-join tools using `yum`.
+
+```bash
sudo yum install adcli sssd authconfig krb5-workstation
```
-
## Join VM to the managed domain
-Now that the required packages are installed on the VM, join the VM to the managed domain. Again, use the appropriate steps for your RHEL distro version.
-
-### RHEL 7
-
-1. Use the `realm discover` command to discover the managed domain. The following example discovers the realm *AADDSCONTOSO.COM*. Specify your own managed domain name in ALL UPPERCASE:
-
- ```console
- sudo realm discover AADDSCONTOSO.COM
- ```
-
- If the `realm discover` command can't find your managed domain, review the following troubleshooting steps:
-
- * Make sure that the domain is reachable from the VM. Try `ping aaddscontoso.com` to see if a positive reply is returned.
- * Check that the VM is deployed to the same, or a peered, virtual network in which the managed domain is available.
- * Confirm that the DNS server settings for the virtual network have been updated to point to the domain controllers of the managed domain.
-
-1. Now initialize Kerberos using the `kinit` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Azure AD](../active-directory/fundamentals/active-directory-groups-members-azure-portal.md).
-
- Again, the managed domain name must be entered in ALL UPPERCASE. In the following example, the account named `contosoadmin@aaddscontoso.com` is used to initialize Kerberos. Enter your own user account that's a part of the managed domain:
-
- ```console
- kinit contosoadmin@AADDSCONTOSO.COM
- ```
-
-1. Finally, join the VM to the managed domain using the `realm join` command. Use the same user account that's a part of the managed domain that you specified in the previous `kinit` command, such as `contosoadmin@AADDSCONTOSO.COM`:
-
- ```console
- sudo realm join --verbose AADDSCONTOSO.COM -U 'contosoadmin@AADDSCONTOSO.COM'
- ```
-
-It takes a few moments to join the VM to the managed domain. The following example output shows the VM has successfully joined to the managed domain:
-
-```output
-Successfully enrolled machine in realm
-```
-
-### RHEL 6
+Now that the required packages are installed on the VM, join the VM to the managed domain.
1. Use the `adcli info` command to discover the managed domain. The following example discovers the realm *AADDSCONTOSO.COM*. Specify your own managed domain name in ALL UPPERCASE:
- ```console
+ ```bash
   sudo adcli info aaddscontoso.com
   ```
-
   If the `adcli info` command can't find your managed domain, review the following troubleshooting steps:

   * Make sure that the domain is reachable from the VM. Try `ping aaddscontoso.com` to see if a positive reply is returned.
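As an aside (not from the article), those reachability and DNS checks can be run directly; a minimal sketch, assuming the example domain name:

```bash
# Hedged sketch: basic connectivity and DNS checks for the managed domain.
ping -c 3 aaddscontoso.com   # is the managed domain reachable?
cat /etc/resolv.conf         # do the VM's DNS servers point at the managed domain's domain controllers?
```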
Successfully enrolled machine in realm
1. First, join the domain using the `adcli join` command; this command also creates the keytab used to authenticate the machine. Use a user account that's a part of the managed domain.
- ```console
+ ```bash
   sudo adcli join aaddscontoso.com -U contosoadmin
   ```
1. Now configure the `/etc/krb5.conf` and create the `/etc/sssd/sssd.conf` files to use the `aaddscontoso.com` Active Directory domain. Make sure that `AADDSCONTOSO.COM` is replaced by your own domain name:
- Open the `/ect/krb5.conf` file with an editor:
+ Open the `/etc/krb5.conf` file with an editor:
- ```console
+ ```bash
sudo vi /etc/krb5.conf ``` Update the `krb5.conf` file to match the following sample:
- ```console
+ ```config
   [logging]
   default = FILE:/var/log/krb5libs.log
   kdc = FILE:/var/log/krb5kdc.log
Successfully enrolled machine in realm
Create the `/etc/sssd/sssd.conf` file:
- ```console
+ ```bash
sudo vi /etc/sssd/sssd.conf ``` Update the `sssd.conf` file to match the following sample:
- ```console
+ ```config
   [sssd]
   services = nss, pam, ssh, autofs
   config_file_version = 2
Successfully enrolled machine in realm
1. Make sure `/etc/sssd/sssd.conf` permissions are 600 and that the file is owned by the root user:
- ```console
+ ```bash
   sudo chmod 600 /etc/sssd/sssd.conf
   sudo chown root:root /etc/sssd/sssd.conf
   ```
1. Use `authconfig` to instruct the VM about the AD Linux integration:
- ```console
- sudo authconfig --enablesssd --enablesssdauth --update
+ ```bash
+ sudo authconfig --enablesssd --enablesssdauth --update
``` 1. Start and enable the sssd service:
- ```console
+ ```bash
   sudo service sssd start
   sudo chkconfig sssd on
   ```
If your VM can't successfully complete the domain-join process, make sure that t
Now check if you can query user AD information using `getent`:
-```console
+```bash
sudo getent passwd contosoadmin ```
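The expected output isn't shown in the digest; a successful lookup returns a passwd-style entry roughly like this (UID/GID, GECOS, and paths are illustrative):

```output
contosoadmin@aaddscontoso.com:*:1234567890:1234567890:Contoso Admin:/home/contosoadmin@aaddscontoso.com:/bin/bash
```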
By default, users can only sign in to a VM using SSH public key-based authentica
1. Open the *sshd_conf* file with an editor:
- ```console
+ ```bash
sudo vi /etc/ssh/sshd_config ``` 1. Update the line for *PasswordAuthentication* to *yes*:
- ```console
+ ```config
PasswordAuthentication yes ```
By default, users can only sign in to a VM using SSH public key-based authentica
1. To apply the changes and let users sign in using a password, restart the SSH service for your RHEL distro version:
- **RHEL 7**
+ ```bash
+ sudo service sshd restart
+ ```
+
- ```console
- sudo systemctl restart sshd
+# [RHEL 7](#tab/rhel7)
+
+## Install required packages
+
+The VM needs some additional packages to join the VM to the managed domain. To install and configure these packages, update and install the domain-join tools using `yum`.
+
+```bash
+sudo yum install realmd sssd krb5-workstation krb5-libs oddjob oddjob-mkhomedir samba-common-tools
+```
+## Join VM to the managed domain
+
+Now that the required packages are installed on the VM, join the VM to the managed domain. Again, use the appropriate steps for your RHEL distro version.
+
+1. Use the `realm discover` command to discover the managed domain. The following example discovers the realm *AADDSCONTOSO.COM*. Specify your own managed domain name in ALL UPPERCASE:
+
+ ```bash
+ sudo realm discover AADDSCONTOSO.COM
```
- **RHEL 6**
+ If the `realm discover` command can't find your managed domain, review the following troubleshooting steps:
- ```console
- sudo service sshd restart
+ * Make sure that the domain is reachable from the VM. Try `ping aaddscontoso.com` to see if a positive reply is returned.
+ * Check that the VM is deployed to the same, or a peered, virtual network in which the managed domain is available.
+ * Confirm that the DNS server settings for the virtual network have been updated to point to the domain controllers of the managed domain.
+
+1. Now initialize Kerberos using the `kinit` command. Specify a user that's a part of the managed domain. If needed, [add a user account to a group in Azure AD](../active-directory/fundamentals/active-directory-groups-members-azure-portal.md).
+
+ Again, the managed domain name must be entered in ALL UPPERCASE. In the following example, the account named `contosoadmin@aaddscontoso.com` is used to initialize Kerberos. Enter your own user account that's a part of the managed domain:
+
+ ```bash
+ sudo kinit contosoadmin@AADDSCONTOSO.COM
+ ```
+
+1. Finally, join the VM to the managed domain using the `realm join` command. Use the same user account that's a part of the managed domain that you specified in the previous `kinit` command, such as `contosoadmin@AADDSCONTOSO.COM`:
+
+ ```bash
+ sudo realm join --verbose AADDSCONTOSO.COM -U 'contosoadmin@AADDSCONTOSO.COM'
+ ```
+
+It takes a few moments to join the VM to the managed domain. The following example output shows the VM has successfully joined to the managed domain:
+
+```output
+Successfully enrolled machine in realm
+```
+
+## Allow password authentication for SSH
+
+By default, users can only sign in to a VM using SSH public key-based authentication. Password-based authentication fails. When you join the VM to a managed domain, those domain accounts need to use password-based authentication. Update the SSH configuration to allow password-based authentication as follows.
+
+1. Open the *sshd_conf* file with an editor:
+
+ ```bash
+ sudo vi /etc/ssh/sshd_config
```
+1. Update the line for *PasswordAuthentication* to *yes*:
+
+ ```bash
+ PasswordAuthentication yes
+ ```
+
+ When done, save and exit the *sshd_conf* file using the `:wq` command of the editor.
+
+1. To apply the changes and let users sign in using a password, restart the SSH service.
+
+ ```bash
+ sudo systemctl restart sshd
+ ```
++
## Grant the 'AAD DC Administrators' group sudo privileges

To grant members of the *AAD DC Administrators* group administrative privileges on the RHEL VM, you add an entry to the */etc/sudoers*. Once added, members of the *AAD DC Administrators* group can use the `sudo` command on the RHEL VM.

1. Open the *sudoers* file for editing:
- ```console
+ ```bash
sudo visudo ``` 1. Add the following entry to the end of */etc/sudoers* file. The *AAD DC Administrators* group contains whitespace in the name, so include the backslash escape character in the group name. Add your own domain name, such as *aaddscontoso.com*:
- ```console
+ ```config
# Add 'AAD DC Administrators' group members as admins. %AAD\ DC\ Administrators@aaddscontoso.com ALL=(ALL) NOPASSWD:ALL ```
To verify that the VM has been successfully joined to the managed domain, start
1. Create a new SSH connection from your console. Use a domain account that belongs to the managed domain using the `ssh -l` command, such as `contosoadmin@aaddscontoso.com` and then enter the address of your VM, such as *rhel.aaddscontoso.com*. If you use the Azure Cloud Shell, use the public IP address of the VM rather than the internal DNS name.
- ```console
- ssh -l contosoadmin@AADDSCONTOSO.com rhel.aaddscontoso.com
+ ```bash
+ sudo ssh -l contosoadmin@AADDSCONTOSO.com rhel.aaddscontoso.com
``` 1. When you've successfully connected to the VM, verify that the home directory was initialized correctly:
- ```console
- pwd
+ ```bash
+ sudo pwd
``` You should be in the */home* directory with your own directory that matches the user account. 1. Now check that the group memberships are being resolved correctly:
- ```console
- id
+ ```bash
+ sudo id
``` You should see your group memberships from the managed domain. 1. If you signed in to the VM as a member of the *AAD DC Administrators* group, check that you can correctly use the `sudo` command:
- ```console
+ ```bash
sudo yum update ```
active-directory-domain-services Join Suse Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-suse-linux-vm.md
Once the VM is deployed, follow the steps to connect to the VM using SSH.
To make sure that the VM host name is correctly configured for the managed domain, edit the */etc/hosts* file and set the hostname:
-```console
+```bash
sudo vi /etc/hosts ```
In the *hosts* file, update the *localhost* address. In the following example:
Update these names with your own values:
-```console
+```config
127.0.0.1 linux-q2gr linux-q2gr.aaddscontoso.com ```
To join the managed domain using **winbind** and the *YaST command line interfac
* Join the domain:
- ```console
+ ```bash
sudo yast samba-client joindomain domain=aaddscontoso.com user=<admin> password=<admin password> machine=<(optional) machine account> ```
To join the managed domain using **winbind** and the *`samba net` command*:
1. Install kerberos client and samba-winbind:
- ```console
+ ```bash
sudo zypper in krb5-client samba-winbind ```
To join the managed domain using **winbind** and the *`samba net` command*:
* /etc/samba/smb.conf
- ```ini
+ ```config
   [global]
   workgroup = AADDSCONTOSO
   usershare allow guests = NO # disallow guests from sharing
To join the managed domain using **winbind** and the *`samba net` command*:
* /etc/krb5.conf
- ```ini
+ ```config
   [libdefaults]
   default_realm = AADDSCONTOSO.COM
   clockskew = 300
To join the managed domain using **winbind** and the *`samba net` command*:
* /etc/security/pam_winbind.conf
- ```ini
+ ```config
   [global]
   cached_login = yes
   krb5_auth = yes
To join the managed domain using **winbind** and the *`samba net` command*:
* /etc/nsswitch.conf
- ```ini
+ ```config
   passwd: compat winbind
   group: compat winbind
   ```
3. Check that the date and time in Azure AD and Linux are in sync. You can do this by adding the Azure AD server to the NTP service:
- 1. Add the following line to /etc/ntp.conf:
+ 1. Add the following line to `/etc/ntp.conf`:
- ```console
+ ```config
server aaddscontoso.com ``` 1. Restart the NTP service:
- ```console
+ ```bash
sudo systemctl restart ntpd ``` 4. Join the domain:
- ```console
+ ```bash
sudo net ads join -U Administrator%Mypassword ``` 5. Enable winbind as a login source in the Linux Pluggable Authentication Modules (PAM):
- ```console
- pam-config --add --winbind
+ ```bash
+ config pam-config --add --winbind
``` 6. Enable automatic creation of home directories so that users can log in:
- ```console
- pam-config -a --mkhomedir
+ ```bash
+ sudo pam-config -a --mkhomedir
``` 7. Start and enable the winbind service:
- ```console
+ ```bash
   sudo systemctl enable winbind
   sudo systemctl start winbind
   ```
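Not part of the original steps, but common ways to sanity-check a winbind join, assuming the samba-winbind tools installed earlier:

```bash
# Hedged additions: verify the domain join and the winbind connection.
sudo net ads testjoin   # confirms the machine account is valid in the domain
sudo wbinfo -t          # checks the trust secret with a domain controller
sudo wbinfo -u          # lists domain users visible through winbind
```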
By default, users can only sign in to a VM using SSH public key-based authentica
1. Open the *sshd_conf* file with an editor:
- ```console
+ ```bash
sudo vi /etc/ssh/sshd_config ``` 1. Update the line for *PasswordAuthentication* to *yes*:
- ```console
+ ```config
PasswordAuthentication yes ```
By default, users can only sign in to a VM using SSH public key-based authentica
1. To apply the changes and let users sign in using a password, restart the SSH service:
- ```console
+ ```bash
sudo systemctl restart sshd ```
To grant members of the *AAD DC Administrators* group administrative privileges
1. Open the *sudoers* file for editing:
- ```console
+ ```bash
sudo visudo ``` 1. Add the following entry to the end of */etc/sudoers* file. The *AAD DC Administrators* group contains whitespace in the name, so include the backslash escape character in the group name. Add your own domain name, such as *aaddscontoso.com*:
- ```console
+ ```config
# Add 'AAD DC Administrators' group members as admins. %AAD\ DC\ Administrators@aaddscontoso.com ALL=(ALL) NOPASSWD:ALL ```
To verify that the VM has been successfully joined to the managed domain, start
1. Create a new SSH connection from your console. Use a domain account that belongs to the managed domain using the `ssh -l` command, such as `contosoadmin@aaddscontoso.com` and then enter the address of your VM, such as *linux-q2gr.aaddscontoso.com*. If you use the Azure Cloud Shell, use the public IP address of the VM rather than the internal DNS name.
- ```console
- ssh -l contosoadmin@AADDSCONTOSO.com linux-q2gr.aaddscontoso.com
+ ```bash
+ sudo ssh -l contosoadmin@AADDSCONTOSO.com linux-q2gr.aaddscontoso.com
``` 2. When you've successfully connected to the VM, verify that the home directory was initialized correctly:
- ```console
- pwd
+ ```bash
+ sudo pwd
``` You should be in the */home* directory with your own directory that matches the user account. 3. Now check that the group memberships are being resolved correctly:
- ```console
- id
+ ```bash
+ sudo id
``` You should see your group memberships from the managed domain. 4. If you signed in to the VM as a member of the *AAD DC Administrators* group, check that you can correctly use the `sudo` command:
- ```console
+ ```bash
sudo zypper update ```
active-directory-domain-services Join Ubuntu Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/join-ubuntu-linux-vm.md
Once the VM is deployed, follow the steps to connect to the VM using SSH.
To make sure that the VM host name is correctly configured for the managed domain, edit the */etc/hosts* file and set the hostname:
-```console
+```bash
sudo vi /etc/hosts ```
In the *hosts* file, update the *localhost* address. In the following example:
Update these names with your own values:
-```console
+```config
127.0.0.1 ubuntu.aaddscontoso.com ubuntu ```
The VM needs some additional packages to join the VM to the managed domain. To i
During the Kerberos installation, the *krb5-user* package prompts for the realm name in ALL UPPERCASE. For example, if the name of your managed domain is *aaddscontoso.com*, enter *AADDSCONTOSO.COM* as the realm. The installation writes the `[realms]` and `[domain_realm]` sections in the */etc/krb5.conf* configuration file. Make sure that you specify the realm in ALL UPPERCASE:
-```console
+```bash
sudo apt-get update
sudo apt-get install krb5-user samba sssd sssd-tools libnss-sss libpam-sss ntp ntpdate realmd adcli
```
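The digest doesn't show the result; as a rough sketch (assuming the example realm, and noting that details vary by distro and package version), the sections the installer writes to */etc/krb5.conf* look something like:

```config
[realms]
    AADDSCONTOSO.COM = {
    }

[domain_realm]
    .aaddscontoso.com = AADDSCONTOSO.COM
    aaddscontoso.com = AADDSCONTOSO.COM
```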
For domain communication to work correctly, the date and time of your Ubuntu VM
1. Open the *ntp.conf* file with an editor:
- ```console
+ ```bash
sudo vi /etc/ntp.conf ``` 1. In the *ntp.conf* file, create a line to add your managed domain's DNS name. In the following example, an entry for *aaddscontoso.com* is added. Use your own DNS name:
- ```console
+ ```config
server aaddscontoso.com ```
For domain communication to work correctly, the date and time of your Ubuntu VM
Run the following commands to complete these steps. Use your own DNS name with the `ntpdate` command:
- ```console
+ ```bash
   sudo systemctl stop ntp
   sudo ntpdate aaddscontoso.com
   sudo systemctl start ntp
   ```
Now that the required packages are installed on the VM and NTP is configured, jo
1. Use the `realm discover` command to discover the managed domain. The following example discovers the realm *AADDSCONTOSO.COM*. Specify your own managed domain name in ALL UPPERCASE:
- ```console
+ ```bash
sudo realm discover AADDSCONTOSO.COM ```
Now that the required packages are installed on the VM and NTP is configured, jo
Again, the managed domain name must be entered in ALL UPPERCASE. In the following example, the account named `contosoadmin@aaddscontoso.com` is used to initialize Kerberos. Enter your own user account that's a part of the managed domain:
- ```console
- kinit -V contosoadmin@AADDSCONTOSO.COM
+ ```bash
+ sudo kinit -V contosoadmin@AADDSCONTOSO.COM
``` 1. Finally, join the VM to the managed domain using the `realm join` command. Use the same user account that's a part of the managed domain that you specified in the previous `kinit` command, such as `contosoadmin@AADDSCONTOSO.COM`:
- ```console
+ ```bash
sudo realm join --verbose AADDSCONTOSO.COM -U 'contosoadmin@AADDSCONTOSO.COM' --install=/ ```
If your VM can't successfully complete the domain-join process, make sure that t
If you received the error *Unspecified GSS failure. Minor code may provide more information (Server not found in Kerberos database)*, open the file */etc/krb5.conf* and add the following code in `[libdefaults]` section and try again:
-```console
+```config
rdns=false ```
One of the packages installed in a previous step was for System Security Service
1. Open the *sssd.conf* file with an editor:
- ```console
+ ```bash
sudo vi /etc/sssd/sssd.conf ``` 1. Comment out the line for *use_fully_qualified_names* as follows:
- ```console
+ ```config
# use_fully_qualified_names = True ```
One of the packages installed in a previous step was for System Security Service
1. To apply the change, restart the SSSD service:
- ```console
+ ```bash
sudo systemctl restart sssd ```
By default, users can only sign in to a VM using SSH public key-based authentica
1. Open the *sshd_conf* file with an editor:
- ```console
+ ```bash
sudo vi /etc/ssh/sshd_config ``` 1. Update the line for *PasswordAuthentication* to *yes*:
- ```console
+ ```config
PasswordAuthentication yes ```
By default, users can only sign in to a VM using SSH public key-based authentica
1. To apply the changes and let users sign in using a password, restart the SSH service:
- ```console
+ ```bash
sudo systemctl restart ssh ```
By default, users can only sign in to a VM using SSH public key-based authentica
To enable automatic creation of the home directory when a user first signs in, complete the following steps:
-1. Open the */etc/pam.d/common-session* file in an editor:
+1. Open the `/etc/pam.d/common-session` file in an editor:
- ```console
+ ```bash
sudo vi /etc/pam.d/common-session ``` 1. Add the following line in this file below the line `session optional pam_sss.so`:
- ```console
+ ```config
session required pam_mkhomedir.so skel=/etc/skel/ umask=0077 ```
To grant members of the *AAD DC Administrators* group administrative privileges
1. Open the *sudoers* file for editing:
- ```console
+ ```bash
sudo visudo ``` 1. Add the following entry to the end of */etc/sudoers* file:
- ```console
+ ```config
# Add 'AAD DC Administrators' group members as admins. %AAD\ DC\ Administrators ALL=(ALL) NOPASSWD:ALL ```
To verify that the VM has been successfully joined to the managed domain, start
1. Create a new SSH connection from your console. Use a domain account that belongs to the managed domain using the `ssh -l` command, such as `contosoadmin@aaddscontoso.com` and then enter the address of your VM, such as *ubuntu.aaddscontoso.com*. If you use the Azure Cloud Shell, use the public IP address of the VM rather than the internal DNS name.
- ```console
- ssh -l contosoadmin@AADDSCONTOSO.com ubuntu.aaddscontoso.com
+ ```bash
+ sudo ssh -l contosoadmin@AADDSCONTOSO.com ubuntu.aaddscontoso.com
``` 1. When you've successfully connected to the VM, verify that the home directory was initialized correctly:
- ```console
- pwd
+ ```bash
+ sudo pwd
``` You should be in the */home* directory with your own directory that matches the user account. 1. Now check that the group memberships are being resolved correctly:
- ```console
- id
+ ```bash
+ sudo id
``` You should see your group memberships from the managed domain. 1. If you signed in to the VM as a member of the *AAD DC Administrators* group, check that you can correctly use the `sudo` command:
- ```console
+ ```bash
sudo apt-get update ```
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md
Previously updated : 03/28/2023 Last updated : 03/29/2023
Use the steps in the example to provision roles for a user to your application.
![Add SingleAppRoleAssignment](./media/customize-application-attributes/edit-attribute-singleapproleassignment.png) - **Things to consider**
- - Ensure that multiple roles aren't assigned to a user. There is no guarantee which role is provisioned.
+ - Ensure that multiple roles aren't assigned to a user. There's no guarantee which role is provisioned.
 - SingleAppRoleAssignments isn't compatible with setting scope to "Sync All users and groups."
- **Example request (POST)**
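The example request is truncated in this digest; as a hedged sketch (field values hypothetical, shape following the SCIM roles convention used by Azure AD provisioning), a single provisioned role appears in the outbound payload roughly like this:

```config
"roles": [
    {
        "primary": true,
        "type": "WindowsAzureActiveDirectoryRole",
        "display": "Admin",
        "value": "Admin"
    }
]
```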
Certain attributes such as phoneNumbers and emails are multi-value attributes wh
## Restoring the default attributes and attribute-mappings
-Should you need to start over and reset your existing mappings back to their default state, you can select the **Restore default mappings** check box and save the configuration. Doing so sets all mappings and scoping filters as if the application was just added to your Azure AD tenant from the application gallery.
+Should you need to start over and reset your existing mappings back to their default state, you can select the **Restore default mappings** check box and save the configuration. Doing so sets all mappings and scoping filters as if the application was added to your Azure AD tenant from the application gallery.
-Selecting this option will effectively force a resynchronization of all users while the provisioning service is running.
+Selecting this option forces a resynchronization of all users while the provisioning service is running.
> [!IMPORTANT]
> We strongly recommend that **Provisioning status** be set to **Off** before invoking this option.
Selecting this option will effectively force a resynchronization of all users wh
- Updating attribute-mappings has an impact on the performance of a synchronization cycle. An update to the attribute-mapping configuration requires all managed objects to be reevaluated.
- A recommended best practice is to keep the number of consecutive changes to your attribute-mappings at a minimum.
- Adding a photo attribute to be provisioned to an app isn't supported today as you can't specify the format to sync the photo. You can request the feature on [User Voice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789)
-- The attribute IsSoftDeleted is often part of the default mappings for an application. IsSoftdeleted can be true in one of four scenarios (the user is out of scope due to being unassigned from the application, the user is out of scope due to not meeting a scoping filter, the user has been soft deleted in Azure AD, or the property AccountEnabled is set to false on the user). It's not recommended to remove the IsSoftDeleted attribute from your attribute mappings.
+- The attribute `IsSoftDeleted` is often part of the default mappings for an application. `IsSoftdeleted` can be true in one of four scenarios: 1) The user is out of scope due to being unassigned from the application. 2) The user is out of scope due to not meeting a scoping filter. 3) The user has been soft deleted in Azure AD. 4) The property `AccountEnabled` is set to false on the user. It's not recommended to remove the `IsSoftDeleted` attribute from your attribute mappings.
- The Azure AD provisioning service doesn't support provisioning null values.
- The primary key, typically "ID", shouldn't be included as a target attribute in your attribute mappings.
- The role attribute typically needs to be mapped using an expression, rather than a direct mapping. For more information about role mapping, see [Provisioning a role to a SCIM app](#provisioning-a-role-to-a-scim-app).
active-directory How Provisioning Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/how-provisioning-works.md
Previously updated : 02/13/2023 Last updated : 03/30/2023
To request an automatic Azure AD provisioning connector for an app that doesn't
## Authorization
-Credentials are required for Azure AD to connect to the application's user management API. While you're configuring automatic user provisioning for an application, you need to enter valid credentials. For gallery applications, you can find credential types and requirements for the application by referring to the app tutorial. For non-gallery applications, you can refer to the [SCIM](./use-scim-to-provision-users-and-groups.md#authorization-to-provisioning-connectors-in-the-application-gallery) documentation to understand the credential types and requirements. In the Azure portal, you are able to test the credentials by having Azure AD attempt to connect to the app's provisioning app using the supplied credentials.
+Credentials are required for Azure AD to connect to the application's user management API. While you're configuring automatic user provisioning for an application, you need to enter valid credentials. For gallery applications, you can find credential types and requirements for the application by referring to the app tutorial. For non-gallery applications, you can refer to the [SCIM](./use-scim-to-provision-users-and-groups.md#authorization-to-provisioning-connectors-in-the-application-gallery) documentation to understand the credential types and requirements. In the Azure portal, you're able to test the credentials by having Azure AD attempt to connect to the app's provisioning app using the supplied credentials.
## Mapping attributes
After the initial cycle, all other cycles will:
The provisioning service continues running back-to-back incremental cycles indefinitely, at intervals defined in the [tutorial specific to each application](../saas-apps/tutorial-list.md). Incremental cycles continue until one of the following events occurs:

- The service is manually stopped using the Azure portal, or using the appropriate Microsoft Graph API command.
-- A new initial cycle is triggered using the **Restart provisioning** option in the Azure portal, or using the appropriate Microsoft Graph API command. This action clears any stored watermark and causes all source objects to be evaluated again. This will not break the links between source and target objects. To break the links use [Restart synchronizationJob](/graph/api/synchronization-synchronizationjob-restart?view=graph-rest-beta&tabs=http&preserve-view=true) with the following request:
+- A new initial cycle is triggered using the **Restart provisioning** option in the Azure portal, or using the appropriate Microsoft Graph API command. This action clears any stored watermark and causes all source objects to be evaluated again. This won't break the links between source and target objects. To break the links use [Restart synchronizationJob](/graph/api/synchronization-synchronizationjob-restart?view=graph-rest-beta&tabs=http&preserve-view=true) with the following request:
<!-- { "blockType": "request",
The provisioning service supports both deleting and disabling (sometimes referre
**Configure your application to disable a user**
-Ensure that you have selected the checkbox for updates.
+Confirm the checkbox for updates is selected.
-Ensure that you have the mapping for *active* for your application. If your using an application from the app gallery, the mapping may be slightly different. Please ensure that you use the default / out of the box mapping for gallery applications.
+Confirm the mapping for *active* for your application. If you're using an application from the app gallery, the mapping may be slightly different. In this case, use the default mapping for gallery applications.
:::image type="content" source="./media/how-provisioning-works/disable-user.png" alt-text="Disable a user" lightbox="./media/how-provisioning-works/disable-user.png":::
Ensure that you have the mapping for *active* for your application. If your usin
The following scenarios will trigger a disable or a delete:

* A user is soft deleted in Azure AD (sent to the recycle bin / AccountEnabled property set to false).
- 30 days after a user is deleted in Azure AD, they will be permanently deleted from the tenant. At this point, the provisioning service will send a DELETE request to permanently delete the user in the application. At any time during the 30-day window, you can [manually delete a user permanently](../fundamentals/active-directory-users-restore.md), which sends a delete request to the application.
+ 30 days after a user is deleted in Azure AD, they're permanently deleted from the tenant. At this point, the provisioning service sends a DELETE request to permanently delete the user in the application. At any time during the 30-day window, you can [manually delete a user permanently](../fundamentals/active-directory-users-restore.md), which sends a delete request to the application.
* A user is permanently deleted / removed from the recycle bin in Azure AD. * A user is unassigned from an app. * A user goes from in scope to out of scope (doesn't pass a scoping filter anymore).
The following scenarios will trigger a disable or a delete:
By default, the Azure AD provisioning service soft deletes or disables users that go out of scope. If you want to override this default behavior, you can set a flag to [skip out-of-scope deletions.](skip-out-of-scope-deletions.md)
-If one of the above four events occurs and the target application does not support soft deletes, the provisioning service will send a DELETE request to permanently delete the user from the app.
+If one of the above four events occurs and the target application doesn't support soft deletes, the provisioning service will send a DELETE request to permanently delete the user from the app.
-If you see an attribute IsSoftDeleted in your attribute mappings, it is used to determine the state of the user and whether to send an update request with active = false to soft delete the user.
+If you see an attribute IsSoftDeleted in your attribute mappings, it's used to determine the state of the user and whether to send an update request with active = false to soft delete the user.
**Deprovisioning events**
The following table describes how you can configure deprovisioning actions with
|--|--|
|If a user is unassigned from an app, soft-deleted in Azure AD, or blocked from sign-in, do nothing.|Remove isSoftDeleted from the attribute mappings and / or set the [skip out of scope deletions](skip-out-of-scope-deletions.md) property to true.|
|If a user is unassigned from an app, soft-deleted in Azure AD, or blocked from sign-in, set a specific attribute to true / false.|Map isSoftDeleted to the attribute that you would like to set to false.|
-|When a user is disabled in Azure AD, unassigned from an app, soft-deleted in Azure AD, or blocked from sign-in, send a DELETE request to the target application.|This is currently supported for a limited set of gallery applications where the functionality is required. It is not configurable by customers.|
-|When a user is deleted in Azure AD, do nothing in the target application.|Ensure that "Delete" is not selected as one of the target object actions in the [attriubte configuration experience](skip-out-of-scope-deletions.md).|
+|When a user is disabled in Azure AD, unassigned from an app, soft-deleted in Azure AD, or blocked from sign-in, send a DELETE request to the target application.|This is currently supported for a limited set of gallery applications where the functionality is required. It's not configurable by customers.|
+|When a user is deleted in Azure AD, do nothing in the target application.|Ensure that "Delete" isn't selected as one of the target object actions in the [attribute configuration experience](skip-out-of-scope-deletions.md).|
|When a user is deleted in Azure AD, set the value of an attribute in the target application.|Not supported.|
|When a user is deleted in Azure AD, delete the user in the target application|This is supported. Ensure that Delete is selected as one of the target object actions in the [attribute configuration experience](skip-out-of-scope-deletions.md).|

**Known limitations**
-* If a user that was previously managed by the provisioning service is unassigned from an app, or from a group assigned to an app we will send a disable request. At that point, the user is not managed by the service and we will not send a delete request when they are deleted from the directory.
-* Provisioning a user that is disabled in Azure AD is not supported. They must be active in Azure AD before they are provisioned.
-* When a user goes from soft-deleted to active, the Azure AD provisioning service will activate the user in the target app, but will not automatically restore the group memberships. The target application should maintain the group memberships for the user in inactive state. If the target application does not support this, you can restart provisioning to update the group memberships.
+* If a user that was previously managed by the provisioning service is unassigned from an app, or from a group assigned to an app, we will send a disable request. At that point, the user isn't managed by the service and we won't send a delete request when they're deleted from the directory.
+* Provisioning a user that is disabled in Azure AD isn't supported. They must be active in Azure AD before they're provisioned.
+* When a user goes from soft-deleted to active, the Azure AD provisioning service will activate the user in the target app, but won't automatically restore the group memberships. The target application should maintain the group memberships for the user in inactive state. If the target application doesn't support this, you can restart provisioning to update the group memberships.
**Recommendation**
active-directory How To Mfa Server Migration Utility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md
Previously updated : 01/29/2023 Last updated : 03/27/2023
Migrating user data doesn't remove or alter any data in the Multi-Factor Authent
The MFA Server Migration utility targets a single Azure AD group for all migration activities. You can add users directly to this group, or add other groups. You can also add them in stages during the migration.
-To begin the migration process, enter the name or GUID of the Azure AD group you want to migrate. Once complete, press Tab or click outside the window and the utility will begin searching for the appropriate group. The window will populate all users in the group. A large group can take several minutes to finish.
+To begin the migration process, enter the name or GUID of the Azure AD group you want to migrate. Once complete, press Tab or click outside the window to begin searching for the appropriate group. All users in the group are populated. A large group can take several minutes to finish.
To view attribute data for a user, highlight the user, and select **View**: :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/view-user.png" alt-text="Screenshot of how to view use settings.":::
-This window displays the attributes for the selected user in both Azure AD and the on-premises MFA Server. You can use this window to view how data was written to a user after they've been migrated.
+This window displays the attributes for the selected user in both Azure AD and the on-premises MFA Server. You can use this window to view how data was written to a user after migration.
-The settings option allows you to change the settings for the migration process:
+The **Settings** option allows you to change the settings for the migration process:
:::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/settings.png" alt-text="Screenshot of settings.":::
The settings option allows you to change the settings for the migration process:
- The migration utility tries direct matching to UPN before using the on-premises Active Directory attribute.
- If no match is found, it calls a Windows API to find the Azure AD UPN and get the SID, which it uses to search the MFA Server user list.
- If the Windows API doesn't find the user or the SID isn't found in the MFA Server, then it will use the configured Active Directory attribute to find the user in the on-premises Active Directory, and then use the SID to search the MFA Server user list.
-- Automatic synchronization – Starts a background service that will continually monitor any authentication method changes to users in the on-premises MFA Server, and write them to Azure AD at the specified time interval defined
+- Automatic synchronization – Starts a background service that will continually monitor any authentication method changes to users in the on-premises MFA Server, and write them to Azure AD at the specified time interval defined.
+- Synchronization server – Allows the MFA Server Migration Sync service to run on a secondary MFA Server rather than only run on the primary. To configure the Migration Sync service to run on a secondary server, the `Configure-MultiFactorAuthMigrationUtility.ps1` script must be run on the server to register a certificate with the MFA Server Migration Utility app registration. The certificate is used to authenticate to Microsoft Graph.
-The migration process can be an automatic process, or a manual process.
+The migration process can be automatic or manual.
The manual process steps are:

1. To begin the migration process for a user or selection of multiple users, press and hold the Ctrl key while selecting each of the user(s) you wish to migrate.
1. After you select the desired users, click **Migrate Users** > **Selected users** > **OK**.
1. To migrate all users in the group, click **Migrate Users** > **All users in AAD group** > **OK**.
+1. You can migrate users even if they are unchanged. By default, the utility is set to **Only migrate users that have changed**. Click **Migrate all users** to re-migrate previously migrated users that are unchanged. Migrating unchanged users can be useful during testing if an administrator needs to reset a user's Azure MFA settings and wants to re-migrate them.
-For the automatic process, click **Automatic synchronization** in the settings dialog, and then select whether you want all users to be synced, or only members of a given Azure AD group.
+ :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/migrate-users.png" alt-text="Screenshot of Migrate users dialog.":::
+
+For the automatic process, click **Automatic synchronization** in **Settings**, and then select whether you want all users to be synced, or only members of a given Azure AD group.
The following table lists the sync logic for the various methods.
active-directory Concept Conditional Access Report Only https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-report-only.md
Previously updated : 01/24/2023 Last updated : 03/30/2023
Report-only mode is a new Conditional Access policy state that allows administra
> [!WARNING] > Policies in report-only mode that require compliant devices may prompt users on Mac, iOS, and Android to select a device certificate during policy evaluation, even though device compliance is not enforced. These prompts may repeat until the device is made compliant. To prevent end users from receiving prompts during sign-in, exclude device platforms Mac, iOS and Android from report-only policies that perform device compliance checks. Note that report-only mode is not applicable for Conditional Access policies with "User Actions" scope.
-![Report-only tab in Azure AD sign-in log](./media/concept-conditional-access-report-only/report-only-detail-in-sign-in-log.png)
+![Screenshot showing the report-only tab in a sign-in log.](./media/concept-conditional-access-report-only/report-only-detail-in-sign-in-log.png)
## Policy results
active-directory Concept Conditional Access Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-session.md
Previously updated : 02/27/2023 Last updated : 03/28/2023
For more information, see the article [Configure authentication session manageme
- **Disable** only works when **All cloud apps** are selected, no conditions are selected, and **Disable** is selected under **Session** > **Customize continuous access evaluation** in a Conditional Access policy. You can choose to disable all users or specific users and groups.

- :::image type="content" source="media/concept-conditional-access-session/continuous-access-evaluation-session-controls.png" alt-text="CAE Settings in a new Conditional Access policy in the Azure portal." lightbox="media/concept-conditional-access-session/continuous-access-evaluation-session-controls.png":::
-## Disable resilience defaults (Preview)
+## Disable resilience defaults
During an outage, Azure AD extends access to existing sessions while enforcing Conditional Access policies. If resilience defaults are disabled, access is denied once existing sessions expire. For more information, see the article [Conditional Access: Resilience defaults](resilience-defaults.md).
+## Require token protection for sign-in sessions (preview)
+
+Token protection (sometimes referred to as token binding in the industry) attempts to reduce attacks using token theft by ensuring a token is usable only from the intended device. When an attacker is able to steal a token, by hijacking or replay, they can impersonate their victim until the token expires or is revoked. Token theft is thought to be a relatively rare event, but the damage from it can be significant.
+
+The preview works for specific scenarios only. For more information, see the article [Conditional Access: Token protection (preview)](concept-token-protection.md).
+ ## Next steps - [Conditional Access common policies](concept-conditional-access-policy-common.md)
active-directory Howto Conditional Access Insights Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-insights-reporting.md
Previously updated : 02/27/2023 Last updated : 03/28/2023
To access the insights and reporting workbook:
The insights and reporting dashboard lets you see the impact of one or more Conditional Access policies over a specified period. Start by setting each of the parameters at the top of the workbook.
-![Conditional Access Insights and Reporting dashboard in the Azure portal](./media/howto-conditional-access-insights-reporting/conditional-access-insights-and-reporting-dashboard.png)
**Conditional Access policy**: Select one or more Conditional Access policies to view their combined impact. Policies are separated into two groups: Enabled and Report-only policies. By default, all Enabled policies are selected. These enabled policies are the policies currently enforced in your tenant.
The insights and reporting dashboard lets you see the impact of one or more Cond
Once the parameters have been set, the impact summary loads. The summary shows how many users or sign-ins during the time range resulted in "Success", "Failure", "User action required" or "Not applied" when the selected policies were evaluated.
-![Impact summary in the Conditional Access workbook](./media/howto-conditional-access-insights-reporting/workbook-impact-summary.png)
+![Screenshot showing an example impact summary in the Conditional Access workbook.](./media/howto-conditional-access-insights-reporting/workbook-impact-summary.png)
**Total**: The number of users or sign-ins during the time period where at least one of the selected policies was evaluated.
Once the parameters have been set, the impact summary loads. The summary shows h
### Understanding the impact
-![Workbook breakdown per condition and status](./media/howto-conditional-access-insights-reporting/workbook-breakdown-condition-and-status.png)
+![Screenshot showing a workbook breakdown per condition and status.](./media/howto-conditional-access-insights-reporting/workbook-breakdown-condition-and-status.png)
View the breakdown of users or sign-ins for each of the conditions. You can filter the sign-ins of a particular result (for example, Success or Failure) by selecting one of the summary tiles at the top of the workbook. You can see the breakdown of sign-ins for each of the Conditional Access conditions: device state, device platform, client app, location, application, and sign-in risk.

## Sign-in details
-![Workbook sign-in details](./media/howto-conditional-access-insights-reporting/workbook-sign-in-details.png)
+![Screenshot showing workbook sign-in details.](./media/howto-conditional-access-insights-reporting/workbook-sign-in-details.png)
-You can also investigate the sign-ins of a specific user by searching for sign-ins at the bottom of the dashboard. The query on the left displays the most frequent users. Selecting a user filters the query to the right.
+You can also investigate the sign-ins of a specific user by searching for sign-ins at the bottom of the dashboard. The query displays the most frequent users. Selecting a user filters the query.
> [!NOTE] > When downloading the Sign-ins logs, choose JSON format to include Conditional Access report-only result data.
In order to access the workbook, you need the proper Azure AD permissions and Lo
1. Type `SigninLogs` into the query box and select **Run**. 1. If the query doesn't return any results, your workspace may not have been configured correctly.
-![Troubleshoot failing queries](./media/howto-conditional-access-insights-reporting/query-troubleshoot-sign-in-logs.png)
+![Screenshot showing how to troubleshoot failing queries.](./media/howto-conditional-access-insights-reporting/query-troubleshoot-sign-in-logs.png)
For more information about how to stream Azure AD sign-in logs to a Log Analytics workspace, see the article [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
active-directory Howto Policy Approved App Or App Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-approved-app-or-app-protection.md
After confirming your settings using [report-only mode](howto-conditional-access
[Conditional Access common policies](concept-conditional-access-policy-common.md)
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+[Migrate approved client app to application protection policy in Conditional Access](migrate-approved-client-app.md)
active-directory Migrate Approved Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/migrate-approved-client-app.md
+
+ Title: Migrate approved client app to application protection policy in Conditional Access
+description: The approved client app control is going away. Migrate to App protection policies.
+
+Last updated : 03/28/2023
+
+# Migrate approved client app to application protection policy in Conditional Access
+
+In this article, you learn how to migrate from the approved client app Conditional Access grant to the application protection policy grant. App protection policies provide the same data loss protection as approved client app policies, but with other benefits. For more information about the benefits of using app protection policies, see the article [App protection policies overview](/mem/intune/apps/app-protection-policy).
+
+The approved client app grant is retiring in early March 2026. Organizations must transition all current Conditional Access policies that use only the Require Approved Client App grant to Require Approved Client App or Application Protection Policy by March 2026. Additionally, for any new Conditional Access policy, only apply the Require application protection policy grant.
+
+After March 2026, Microsoft will stop enforcing require approved client app control, and it will be as if this grant isn't selected. Use the following steps before March 2026 to protect your organization's data.
+
+## Edit an existing Conditional Access policy
+
+Require approved client apps or app protection policy with mobile devices
+
+The following steps make an existing Conditional Access policy require an approved client app or an app protection policy when using an iOS/iPadOS or Android device. This policy works in tandem with an app protection policy created in Microsoft Intune.
+
+Organizations can choose to update their policies using the following steps.
+
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select a policy that uses the approved client app grant.
+1. Under **Access controls** > **Grant**, select **Grant access**.
+ 1. Select **Require approved client app** and **Require app protection policy**.
+ 1. **For multiple controls**, select **Require one of the selected controls**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+
+Repeat the previous steps on all of your policies that use the approved client app grant.
+
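+If you manage policies programmatically, the same grant change can be made through Microsoft Graph. The following is a minimal sketch, assuming you already have the policy ID and an app or administrator with the `Policy.ReadWrite.ConditionalAccess` permission; in the Graph schema, `approvedApplication` corresponds to **Require approved client app** and `compliantApplication` corresponds to **Require app protection policy**:
+
+```http
+PATCH https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies/{policy-id}
+Content-Type: application/json
+
+{
+  "grantControls": {
+    "operator": "OR",
+    "builtInControls": [ "approvedApplication", "compliantApplication" ]
+  }
+}
+```
+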
+> [!WARNING]
+> Not all applications are supported as approved applications or support application protection policies. For a list of some common client apps, see [App protection policy requirement](concept-conditional-access-grant.md#require-app-protection-policy). If your application isn't listed there, contact the application developer.
+
+## Create a Conditional Access policy
+
+Require app protection policy with mobile devices
+
+The following steps help create a Conditional Access policy requiring an approved client app or an app protection policy when using an iOS/iPadOS or Android device. This policy works in tandem with an [app protection policy created in Microsoft Intune](/mem/intune/apps/app-protection-policies).
+
+Organizations can choose to deploy this policy using the following steps.
+
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select **New policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users or workload identities**.
+ 1. Under **Include**, select **All users**.
+ 1. Under **Exclude**, select **Users and groups** and exclude at least one account to prevent yourself from being locked out. If you don't exclude any accounts, you can't create the policy.
+1. Under **Cloud apps or actions**, select **All cloud apps**.
+1. Under **Conditions** > **Device platforms**, set **Configure** to **Yes**.
+ 1. Under **Include**, **Select device platforms**.
+ 1. Choose **Android** and **iOS**.
+ 1. Select **Done**.
+1. Under **Access controls** > **Grant**, select **Grant access**.
+ 1. Select **Require approved client app** and **Require app protection policy**.
+ 1. **For multiple controls**, select **Require one of the selected controls**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+
+> [!NOTE]
+> If an app does not support **Require app protection policy**, end users trying to access resources from that app will be blocked.
+
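+The equivalent policy created through Microsoft Graph might look like the following minimal sketch; the display name and excluded group ID are placeholders, and `enabledForReportingButNotEnforced` corresponds to report-only mode:
+
+```http
+POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
+Content-Type: application/json
+
+{
+  "displayName": "Require approved client app or app protection policy",
+  "state": "enabledForReportingButNotEnforced",
+  "conditions": {
+    "users": {
+      "includeUsers": [ "All" ],
+      "excludeGroups": [ "{emergency-access-group-id}" ]
+    },
+    "applications": { "includeApplications": [ "All" ] },
+    "platforms": { "includePlatforms": [ "android", "iOS" ] }
+  },
+  "grantControls": {
+    "operator": "OR",
+    "builtInControls": [ "approvedApplication", "compliantApplication" ]
+  }
+}
+```
+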
+## Next steps
+
+For more information on application protection policies, see:
+
+[App protection policies overview](/mem/intune/apps/app-protection-policy)
active-directory Terms Of Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/terms-of-use.md
Previously updated : 05/26/2022 Last updated : 03/30/2023
For more videos, see:
## What can I do with terms of use?
-Azure AD terms of use policies have the following capabilities:
--- Require employees or guests to accept your terms of use policy before getting access.-- Require employees or guests to accept your terms of use policy on every device before getting access.-- Require employees or guests to accept your terms of use policy on a recurring schedule.-- Require employees or guests to accept your terms of use policy before registering security information in Azure AD Multifactor Authentication (MFA).-- Require employees to accept your terms of use policy before registering security information in Azure AD self-service password reset (SSPR).-- Present a general terms of use policy for all users in your organization.-- Present specific terms of use policies based on a user attributes (such as doctors versus nurses, or domestic versus international employees) by using [dynamic groups](../enterprise-users/groups-dynamic-membership.md)).-- Present specific terms of use policies when accessing high business impact applications, like Salesforce.-- Present terms of use policies in different languages.-- List who has or hasn't accepted to your terms of use policies.-- Help meeting privacy regulations.-- Display a log of terms of use policy activity for compliance and audit.-- Create and manage terms of use policies using [Microsoft Graph APIs](/graph/api/resources/agreement).
+Organizations can use terms of use along with Conditional Access policies to require employees or guests to accept your terms of use policy before getting access. These terms of use statements can be generalized or specific to groups or users and provided in multiple languages. Administrators can determine who has or hasn't accepted terms of use with the provided logs or APIs.
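+
+For example, you can list your terms of use agreements and then retrieve the acceptance records for one of them with Microsoft Graph. This is a minimal sketch, assuming an app or administrator with the `Agreement.Read.All` permission; the agreement ID is a placeholder:
+
+```http
+GET https://graph.microsoft.com/v1.0/identityGovernance/termsOfUse/agreements
+
+GET https://graph.microsoft.com/v1.0/identityGovernance/termsOfUse/agreements/{agreement-id}/acceptances
+```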
## Prerequisites To use and configure Azure AD terms of use policies, you must have: -- Azure AD Premium P1, P2, EMS E3, or EMS E5 licenses.
- - If you don't have one of these subscriptions, you can [get Azure AD Premium](../fundamentals/active-directory-get-started-premium.md) or [enable Azure AD Premium trial](https://azure.microsoft.com/trial/get-started-active-directory/).
-- One of the following administrator accounts for the directory you want to configure:
- - Global Administrator
- - Security Administrator
- - Conditional Access Administrator
+* A working Azure AD tenant with Azure AD Premium P1 or a trial license enabled. If needed, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* Administrators who interact with terms of use must have one or more of the following role assignments depending on the tasks they're performing. To follow the [Zero Trust principle of least privilege](/security/zero-trust/), consider using [Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md) to just-in-time activate privileged role assignments.
+ * Read terms of use configuration and Conditional Access policies
+ * [Security Reader](../roles/permissions-reference.md#security-reader)
+ * [Global Reader](../roles/permissions-reference.md#global-reader)
+ * Create or modify terms of use and Conditional Access policies
+ * [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator)
+ * [Security Administrator](../roles/permissions-reference.md#security-administrator)
## Terms of use document
Azure AD terms of use policies use the PDF format to present content. The PDF fi
Once you've completed your terms of use policy document, use the following procedure to add it.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator or Security Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. 1. Select **New terms**. ![New term of use pane to specify your terms of use settings](./media/terms-of-use/new-tou.png)
-1. In the **Name** box, enter a name for the terms of use policy that will be used in the Azure portal.
+1. In the **Name** box, enter a name for the terms of use policy used in the Azure portal.
1. For **Terms of use document**, browse to your finalized terms of use policy PDF and select it.
-1. Select the language for your terms of use policy document. The language option allows you to upload multiple terms of use policies, each with a different language. The version of the terms of use policy that an end user will see will be based on their browser preferences.
+1. Select the language for your terms of use policy document. The language option allows you to upload multiple terms of use policies, each with a different language. The version of the terms of use policy that an end user sees is based on their browser preferences.
1. In the **Display name** box, enter a title that users see when they sign in. 1. To require end users to view the terms of use policy before accepting them, set **Require users to expand the terms of use** to **On**. 1. To require end users to accept your terms of use policy on every device they're accessing from, set **Require users to consent on every device** to **On**. Users may be required to install other applications if this option is enabled. For more information, see [Per-device terms of use](#per-device-terms-of-use).
Once you've completed your terms of use policy document, use the following proce
| Expire starting on | Frequency | Result | | | | | | Today's date | Monthly | Starting today, users must accept the terms of use policy and then reaccept every month. |
- | Date in the future | Monthly | Starting today, users must accept the terms of use policy. When the future date occurs, consents will expire, and then users must reaccept every month. |
+ | Date in the future | Monthly | Starting today, users must accept the terms of use policy. When the future date occurs, consents expire, and then users must reaccept every month. |
For example, if you set the expire starting on date to **Jan 1** and frequency to **Monthly**, this is how expirations might occur for two users:
Once you've completed your terms of use policy document, use the following proce
| Alice | Jan 1 | Feb 1 | Mar 1 | Apr 1 | | Bob | Jan 15 | Feb 1 | Mar 1 | Apr 1 |
-1. Use the **Duration before re-acceptance required (days)** setting to specify the number of days before the user must reaccept the terms of use policy. This allows users to follow their own schedule. For example, if you set the duration to **30** days, this is how expirations might occur for two users:
+1. Use the **Duration before re-acceptance required (days)** setting to specify the number of days before the user must reaccept the terms of use policy. This option allows users to follow their own schedule. For example, if you set the duration to **30** days, this is how expirations might occur for two users:
| User | First accept date | First expire date | Second expire date | Third expire date | | | | | | |
Once you've completed your terms of use policy document, use the following proce
| Template | Description | | | |
- | **Custom policy** | Select the users, groups, and apps that this terms of use policy will be applied to. |
- | **Create Conditional Access policy later** | This terms of use policy will appear in the grant control list when creating a Conditional Access policy. |
+ | **Custom policy** | Select the users, groups, and apps that this terms of use policy is applied to. |
+ | **Create Conditional Access policy later** | This terms of use policy appears in the grant control list when creating a Conditional Access policy. |
> [!IMPORTANT] > Conditional Access policy controls (including terms of use policies) do not support enforcement on service accounts. We recommend excluding all service accounts from the Conditional Access policy.
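If you prefer to create the agreement programmatically, a minimal Microsoft Graph sketch follows; it assumes the `Agreement.ReadWrite.All` permission, and the display name, file name, and base64-encoded PDF content are placeholders:

```http
POST https://graph.microsoft.com/v1.0/identityGovernance/termsOfUse/agreements
Content-Type: application/json

{
  "displayName": "Contoso Terms of Use",
  "isViewingBeforeAcceptanceRequired": true,
  "files": [
    {
      "fileName": "TOU.pdf",
      "language": "en",
      "isDefault": true,
      "fileData": { "data": "{base64-encoded-PDF}" }
    }
  ]
}
```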
To get started with Azure AD audit logs, use the following procedure:
## What terms of use looks like for users
-Once a ToU policy is created and enforced, users, who are in scope, will see the following screen during sign-in.
+Once a ToU policy is created and enforced, users who are in scope see the following screen during sign-in.
![Example terms of use that appears when a user signs in](./media/terms-of-use/user-tou.png)
You can edit some details of terms of use policies, but you can't modify an exis
1. In the Edit terms of use pane, you can change the following options: - **Name** – the internal name of the ToU that isn't shared with end users - **Display name** – the name that end users can see when viewing the ToU
- **Require users to expand the terms of use** – Setting this option to **On** will force the end user to expand the terms of use policy document before accepting it.
+ - **Require users to expand the terms of use** – Setting this option to **On** forces the end user to expand the terms of use policy document before accepting it.
- (Preview) You can **update an existing terms of use** document - You can add a language to an existing ToU
You can edit some details of terms of use policies, but you can't modify an exis
![Edit terms of use pane showing name and expand options](./media/terms-of-use/edit-terms-use.png) 1. In the pane on the right, upload the pdf for the new version
-1. There's also a toggle option here **Require reaccept** if you want to require your users to accept this new version the next time they sign in. If you require your users to reaccept, next time they try to access the resource defined in your conditional access policy they'll be prompted to accept this new version. If you don't require your users to reaccept, their previous consent will stay current and only new users who haven't consented before or whose consent expires will see the new version. Until the session expires, **Require reaccept** not require users to accept the new TOU. If you want to ensure reaccept, delete and recreate or create a new TOU for this case.
+1. There's also a toggle option here, **Require reaccept**, if you want to require your users to accept this new version the next time they sign in. If you require your users to reaccept, the next time they try to access the resource defined in your Conditional Access policy, they'll be prompted to accept this new version. If you don't require your users to reaccept, their previous consent stays current and only new users who haven't consented before or whose consent expires see the new version. Until the session expires, **Require reaccept** doesn't require users to accept the new TOU. If you want to ensure reacceptance, delete and recreate the ToU, or create a new one for this case.
![Edit terms of use re-accept option highlighted](./media/terms-of-use/re-accept.png) 1. Once you've uploaded your new pdf and decided on reaccept, select Add at the bottom of the pane.
-1. You'll now see the most recent version under the Document column.
+1. You see the most recent version under the Document column.
## View previous versions of a ToU
The following procedure describes how to add a ToU language.
## Per-device terms of use
-The **Require users to consent on every device** setting enables you to require end users to accept your terms of use policy on every device they're accessing from. The end user will be required to register their device in Azure AD. When the device is registered, the device ID is used to enforce the terms of use policy on each device.
+The **Require users to consent on every device** setting enables you to require end users to accept your terms of use policy on every device they're accessing from. The end user is required to register their device in Azure AD. When the device is registered, the device ID is used to enforce the terms of use policy on each device.
Supported platforms and software.
Supported platforms and software.
> | **Internet Explorer** | Yes | Yes | Yes | | > | **Chrome (with extension)** | Yes | Yes | Yes | |
-Per-device terms of use has the following constraints:
+Per-device terms of use have the following constraints:
- A device can only be joined to one tenant. - A user must have permissions to join their device. - The Intune Enrollment app isn't supported. Ensure that it's excluded from any Conditional Access policy requiring Terms of Use policy. - Azure AD B2B users aren't supported.
-If the user's device isn't joined, they'll receive a message that they need to join their device. Their experience will be dependent on the platform and software.
+If the user's device isn't joined, they receive a message that they need to join their device. Their experience is dependent on the platform and software.
### Join a Windows 10 device
If a user is using Windows 10 and Microsoft Edge, they receive a message similar
![Windows 10 and Microsoft Edge - Message indicating your device must be registered](./media/terms-of-use/per-device-win10-edge.png)
-If they're using Chrome, they'll be prompted to install the [Windows 10 Accounts extension](https://chrome.google.com/webstore/detail/windows-10-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji).
+If they're using Chrome, they're prompted to install the [Windows 10 Accounts extension](https://chrome.google.com/webstore/detail/windows-10-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji).
### Register an iOS device
-If a user is using an iOS device, they'll be prompted to install the [Microsoft Authenticator app](https://apps.apple.com/us/app/microsoft-authenticator/id983156458).
+If a user is using an iOS device, they're prompted to install the [Microsoft Authenticator app](https://apps.apple.com/us/app/microsoft-authenticator/id983156458).
### Register an Android device
-If a user is using an Android device, they'll be prompted to install the [Microsoft Authenticator app](https://play.google.com/store/apps/details?id=com.azure.authenticator).
+If a user is using an Android device, they're prompted to install the [Microsoft Authenticator app](https://play.google.com/store/apps/details?id=com.azure.authenticator).
### Browsers
-If a user is using browser that isn't supported, they'll be asked to use a different browser.
+If a user is using a browser that isn't supported, they're asked to use a different browser.
![Message indicating your device must be registered, but browser is not supported](./media/terms-of-use/per-device-browser-unsupported.png)
User acceptance records are deleted:
## Policy changes
-Conditional Access policies take effect immediately. When this happens, the administrator will start to see “sad clouds” or "Azure AD token issues". The administrator must sign out and sign in to satisfy the new policy.
+Conditional Access policies take effect immediately. When this happens, the administrator starts to see “sad clouds” or "Azure AD token issues". The administrator must sign out and sign in to satisfy the new policy.
> [!IMPORTANT] > Users in scope will need to sign out and sign in to satisfy a new policy if:
Terms of use policies can be used for different cloud apps, such as Azure Inform
### Azure Information Protection
-You can configure a Conditional Access policy for the Azure Information Protection app and require a terms of use policy when a user accesses a protected document. This configuration will trigger a terms of use policy before a user accessing a protected document for the first time.
+You can configure a Conditional Access policy for the Azure Information Protection app and require a terms of use policy when a user accesses a protected document. This configuration triggers a terms of use policy before a user accesses a protected document for the first time.
![Cloud apps pane with Microsoft Azure Information Protection app selected](./media/terms-of-use/cloud-app-info-protection.png)
A: The user counts in the terms of use report and who accepted/declined are stor
A: The terms of use details overview data is stored for the lifetime of that terms of use policy, while the Azure AD audit logs are stored for 30 days. **Q: Why do I see a different number of consents in the terms of use details overview versus the exported CSV report?**<br />
-A: The terms of use details overview reflects aggregated acceptances of the current version of the policy (updated once every day). If expiration is enabled or a TOU agreement is updated (with re-acceptance required), the count on the details overview is reset since the acceptances are expired, thereby showing the count of the current version. All acceptance history is still captured in the CSV report.
+A: The terms of use details overview reflects aggregated acceptances of the current version of the policy (updated once every day). If expiration is enabled or a TOU agreement is updated (with reacceptance required), the count on the details overview is reset since the acceptances are expired, thereby showing the count of the current version. All acceptance history is still captured in the CSV report.
**Q: If hyperlinks are in the terms of use policy PDF document, will end users be able to click them?**<br /> A: Yes, end users are able to select hyperlinks to other pages but links to sections within the document aren't supported. Also, hyperlinks in terms of use policy PDFs don't work when accessed from the Azure AD MyApps/MyAccount portal.
A: The user is blocked from getting access to the application. The user would ha
A: You can [review previously accepted terms of use policies](#how-users-can-review-their-terms-of-use), but currently there isn't a way to unaccept. **Q: What happens if I'm also using Intune terms and conditions?**<br />
-A: If you've configured both Azure AD terms of use and [Intune terms and conditions](/intune/terms-and-conditions-create), the user will be required to accept both. For more information, see the [Choosing the right Terms solution for your organization blog post](https://go.microsoft.com/fwlink/?linkid=2010506&clcid=0x409).
+A: If you've configured both Azure AD terms of use and [Intune terms and conditions](/intune/terms-and-conditions-create), the user is required to accept both. For more information, see the [Choosing the right Terms solution for your organization blog post](https://go.microsoft.com/fwlink/?linkid=2010506&clcid=0x409).
**Q: What endpoints does the terms of use service use for authentication?**<br />
-A: Terms of use utilize the following endpoints for authentication: https://tokenprovider.termsofuse.identitygovernance.azure.com, https://myaccount.microsoft.com and https://account.activedirectory.windowsazure.com. If your organization has an allowlist of URLs for enrollment, you'll need to add these endpoints to your allowlist, along with the Azure AD endpoints for sign-in.
+A: Terms of use utilize the following endpoints for authentication: https://tokenprovider.termsofuse.identitygovernance.azure.com, https://myaccount.microsoft.com and https://account.activedirectory.windowsazure.com. If your organization has an allowlist of URLs for enrollment, you need to add these endpoints to your allowlist, along with the Azure AD endpoints for sign-in.
## Next steps
active-directory App Only Access Primer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-only-access-primer.md
In most cases, application-only access is broader and more powerful than [delega
In contrast, you should never use application-only access where a user would normally sign in to manage their own resources. These types of scenarios must use delegated access to be least privileged.
-![Diagram shows illustration of application permissions vs delegated permissions.](./media/permissions-consent-overview/delegated-app-only-permissions.png)
+![Diagram shows illustration of application permissions vs delegated permissions.](./media/app-only-access-primer/app-only-access.png)
active-directory Console Quickstart Portal Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/console-quickstart-portal-nodejs.md
-+ Last updated 08/22/2022
active-directory Daemon Quickstart Portal Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/daemon-quickstart-portal-java.md
-+ Last updated 08/22/2022
active-directory Daemon Quickstart Portal Netcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/daemon-quickstart-portal-netcore.md
-+ Last updated 08/22/2022
active-directory Daemon Quickstart Portal Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/daemon-quickstart-portal-python.md
-+ Last updated 08/22/2022
active-directory Delegated Access Primer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/delegated-access-primer.md
People frequently use different applications to access their data from cloud ser
Use delegated access whenever you want to let a signed-in user work with their own resources or resources they can access. Whether itΓÇÖs an admin setting up policies for their entire organization or a user deleting an email in their inbox, all scenarios involving user actions should use delegated access.
-![Diagram shows illustration of delegated permissions vs application permissions.](./media/permissions-consent-overview/delegated-app-only-permissions.png)
+![Diagram shows illustration of delegated access scenario.](./media/delegated-access-primer/delegated-access.png)
In contrast, delegated access is usually a poor choice for scenarios that must run without a signed-in user, like automation. It may also be a poor choice for scenarios that involve accessing many usersΓÇÖ resources, like data loss prevention or backups. Consider using [application-only access](permissions-consent-overview.md) for these types of operations.
active-directory Desktop Quickstart Portal Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-quickstart-portal-nodejs-desktop.md
-+ Last updated 08/18/2022
active-directory Desktop Quickstart Portal Uwp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-quickstart-portal-uwp.md
-+ Last updated 08/18/2022
active-directory Desktop Quickstart Portal Wpf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-quickstart-portal-wpf.md
-+ Last updated 08/18/2022
active-directory Mobile App Quickstart Portal Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-android.md
-+ Last updated 02/15/2022
active-directory Mobile App Quickstart Portal Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-ios.md
-+ Last updated 02/15/2022
active-directory Quickstart V2 Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-android.md
-+ Last updated 01/14/2022
active-directory Quickstart V2 Aspnet Core Web Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-core-web-api.md
-+ Last updated 12/09/2022
active-directory Quickstart V2 Aspnet Core Webapp Calls Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp-calls-graph.md
-+ Last updated 11/22/2021
active-directory Quickstart V2 Aspnet Core Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md
-+ Last updated 11/22/2021
active-directory Quickstart V2 Aspnet Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-webapp.md
-+ Last updated 11/22/2021
active-directory Quickstart V2 Dotnet Native Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-dotnet-native-aspnet.md
-+ Last updated 01/11/2022
active-directory Quickstart V2 Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-ios.md
-+ Last updated 01/14/2022
active-directory Quickstart V2 Java Daemon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-java-daemon.md
-+ Last updated 01/10/2022
active-directory Quickstart V2 Java Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-java-webapp.md
-+ Last updated 11/22/2021
active-directory Quickstart V2 Javascript Auth Code Angular https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript-auth-code-angular.md
-+ Last updated 11/12/2021
active-directory Quickstart V2 Javascript Auth Code React https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript-auth-code-react.md
-+ Last updated 11/12/2021
active-directory Quickstart V2 Javascript Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript-auth-code.md
-+ Last updated 11/12/2021
active-directory Quickstart V2 Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript.md
-+ Last updated 04/11/2019
active-directory Quickstart V2 Netcore Daemon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-netcore-daemon.md
-+ Last updated 01/10/2022
active-directory Quickstart V2 Nodejs Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-console.md
-+ Last updated 01/10/2022
active-directory Quickstart V2 Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-desktop.md
-+ Last updated 01/14/2022
active-directory Quickstart V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-webapp-msal.md
-+ Last updated 11/22/2021
active-directory Quickstart V2 Python Daemon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-python-daemon.md
-+ Last updated 01/10/2022
active-directory Quickstart V2 Python Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-python-webapp.md
-+ Last updated 01/24/2023
active-directory Quickstart V2 Uwp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-uwp.md
-+ Last updated 01/14/2022
active-directory Quickstart V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-windows-desktop.md
-+ Last updated 01/14/2022
active-directory Spa Quickstart Portal Javascript Auth Code Angular https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-javascript-auth-code-angular.md
-+ Last updated 08/16/2022
active-directory Spa Quickstart Portal Javascript Auth Code React https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-javascript-auth-code-react.md
-+ Last updated 08/16/2022
active-directory Spa Quickstart Portal Javascript Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-javascript-auth-code.md
-+ Last updated 08/16/2022
active-directory Web Api Quickstart Portal Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-api-quickstart-portal-aspnet-core.md
-+ Last updated 08/16/2022
active-directory Web Api Quickstart Portal Dotnet Native Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-api-quickstart-portal-dotnet-native-aspnet.md
-+ Last updated 08/16/2022
active-directory Web App Quickstart Portal Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-aspnet-core.md
-+ Last updated 08/16/2022
active-directory Web App Quickstart Portal Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-aspnet.md
-+ Last updated 08/16/2022
active-directory Web App Quickstart Portal Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-java.md
-+ Last updated 08/16/2022
active-directory Web App Quickstart Portal Node Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-node-js.md
-+ Last updated 08/16/2022
active-directory Web App Quickstart Portal Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-python.md
-+ Last updated 08/16/2022
active-directory Domains Verify Custom Subdomain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-verify-custom-subdomain.md
Because subdomains inherit the authentication type of the root domain by default
Use the following command to promote the subdomain: ```http
-POST https://graph.microsoft.com/{tenant-id}/domains/foo.contoso.com/promote
+POST https://graph.microsoft.com/v1.0/{tenant-id}/domains/foo.contoso.com/promote
``` ### Promote command error conditions
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 03/28/2023 Last updated : 03/30/2023
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]
->This information last updated on March 28th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on March 30th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft 365 F5 Compliance Add-on GCC | SPE_F5_COMP_GCC | 3f17cf90-67a2-4fdb-8587-37c1539507e1 | Customer Lockbox for Government (89b5d3b1-3855-49fe-b46c-87c66dbc1526)<br/>Data Loss Prevention (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>Exchange Online Archiving (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Office 365 Advanced eDiscovery for Government (d1cbfb67-18a8-4792-b643-630b7f19aad1)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | LOCKBOX_ENTERPRISE_GOV (89b5d3b1-3855-49fe-b46c-87c66dbc1526)<br/>BPOS_S_DlpAddOn (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>EQUIVIO_ANALYTICS_GOV (d1cbfb67-18a8-4792-b643-630b7f19aad1)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | | Microsoft 365 F5 Security Add-on | SPE_F5_SEC | 67ffe999-d9ca-49e1-9d2c-03fb28aa7a48 | MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f) | Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Azure Active Directory Premium P2 
(eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f) | | Microsoft 365 F5 Security + Compliance Add-on | SPE_F5_SECCOMP | 32b47245-eb31-44fc-b945-a8b1576c439f | LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>BPOS_S_DlpAddOn (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f) | Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Loss Prevention (9bec7e34-c9fa-40b7-a9d1-bd6d1165c7ed)<br/>Exchange Online Archiving (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 SafeDocs 
(bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f) |
-| Microsoft Flow Free | FLOW_FREE | f30db892-07e9-47e9-837c-80727f46fd3d | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170) | COMMON DATA SERVICE - VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FREE (50e68c76-46c6-4674-81f9-75456511b170) |
+| Microsoft Power Automate Free | FLOW_FREE | f30db892-07e9-47e9-837c-80727f46fd3d | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170) | COMMON DATA SERVICE (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FREE (50e68c76-46c6-4674-81f9-75456511b170) |
| Microsoft 365 E5 Suite Features | M365_E5_SUITE_COMPONENTS | 99cc8282-2f74-4954-83b7-c6a9a1999067 | Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e) | Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e) | | Microsoft 365 F1 | M365_F1_COMM | 50f60901-3181-4b75-8a2c-4c8e4c1d5a72 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>VIVAENGAGE_CORE (a82fbf69-b4d7-49f4-83a6-915b2cf354f4)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/> RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>Viva Engage Core (a82fbf69-b4d7-49f4-83a6-915b2cf354f4)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | | Microsoft 365 F3 GCC | M365_F1_GOV | 2a914830-d700-444a-b73c-e3f31980d833 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>DYN365_CDS_O365_F1_GCC (29007dd3-36c0-4cc2-935d-f5bca2c2c473)<br/>CDS_O365_F1_GCC (5e05331a-0aec-437e-87db-9ef5934b5771)<br/>EXCHANGE_S_DESKLESS_GOV (88f4d7ef-a73b-4246-8047-516022144c9f)<br/>FORMS_GOV_F1 (bfd4133a-bbf3-4212-972b-60412137c428)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A 
(c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K_GOV (d65648f1-9504-46e4-8611-2658763f28b8)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708- 6ee03664b117)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>OFFICEMOBILE_SUBSCRIPTION_GOV (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>POWERAPPS_O365_S1_GOV (49f06c3d-da7d-4fa0-bcce-1458fdd18a59)<br/>FLOW_O365_S1_GOV (5d32692e-5b24-4a59-a77e-b2a8650e25c1)<br/>SHAREPOINTDESKLESS_GOV (b1aeb897-3a19-46e2-8c27-a609413cf193)<br/>MCOIMP_GOV (8a9f17f1-5872-44e8-9b11-3caade9dc90f)<br/>BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 for GCC (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>Azure Rights Management (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>Common Data Service - O365 F1 GCC (29007dd3-36c0-4cc2-935d-f5bca2c2c473)<br/>Common Data Service for Teams_F1 GCC (5e05331a-0aec-437e-87db-9ef5934b5771)<br/>Exchange Online (Kiosk) for Government (88f4d7ef-a73b-4246-8047-516022144c9f)<br/>Forms for Government (Plan F1) (bfd4133a-bbf3-4212-972b-60412137c428)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft Stream for O365 for Government (F1) (d65648f1-9504-46e4-8611-2658763f28b8)<br/>Microsoft Teams for Government (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Planner for Government (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>Office for the Web for Government (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Office Mobile Apps for Office 365 for GCC (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>Power Apps for Office 365 F3 for Government (49f06c3d-da7d-4fa0-bcce-1458fdd18a59)<br/>Power Automate for Office 365 F3 for Government (5d32692e-5b24-4a59-a77e-b2a8650e25c1)<br/>SharePoint KioskG (b1aeb897-3a19-46e2-8c27-a609413cf193)<br/>Skype for Business Online (Plan 1) for Government (8a9f17f1-5872-44e8-9b11-3caade9dc90f)<br/>To-Do (Firstline) (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>Whiteboard (Firstline) (36b29273-c6d0-477a-aca6-6fbe24f538e3) |
active-directory Concept Secure Remote Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-secure-remote-workers.md
Previously updated : 02/27/2023 Last updated : 03/28/2023
There are many recommendations that Azure AD Free, Office 365, or Microsoft 365
| [Enable ADFS smart lock out](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection) (If applicable) | Protects your users from experiencing extranet account lockout from malicious activity. | | [Enable Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md) (if using managed identities) | Smart lockout helps to lock out bad actors who are trying to guess your users' passwords or use brute-force methods to get in. | | [Disable end-user consent to applications](../manage-apps/configure-user-consent.md) | The admin consent workflow gives admins a secure way to grant access to applications that require admin approval so end users don't expose corporate data. Microsoft recommends disabling future user consent operations to help reduce your surface area and mitigate this risk. |
-| [Integrate supported SaaS applications from the gallery to Azure AD and enable Single sign on](../manage-apps/add-application-portal.md) | Azure AD has a gallery that contains thousands of pre-integrated applications. Some of the applications your organization uses are probably in the gallery accessible directly from the Azure portal. Provide access to corporate SaaS applications remotely and securely with improved user experience (SSO) |
+| [Integrate supported SaaS applications from the gallery to Azure AD and enable Single sign on](../manage-apps/add-application-portal.md) | Azure AD has a gallery that contains thousands of preintegrated applications. Some of the applications your organization uses are probably in the gallery accessible directly from the Azure portal. Provide access to corporate SaaS applications remotely and securely with improved user experience (SSO) |
| [Automate user provisioning and deprovisioning from SaaS Applications](../app-provisioning/user-provisioning.md) (if applicable) | Automatically create user identities and roles in the cloud (SaaS) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change, increasing your organization's security. | | [Enable Secure hybrid access: Secure legacy apps with existing app delivery controllers and networks](../manage-apps/secure-hybrid-access.md) (if applicable) | Publish and protect your on-premises and cloud legacy authentication applications by connecting them to Azure AD with your existing application delivery controller or network. | | [Enable self-service password reset](../authentication/tutorial-enable-sspr.md) (applicable to cloud only accounts) | This ability reduces help desk calls and loss of productivity when a user can't sign into their device or an application. |
The following table is intended to highlight the key actions for the following l
| [Disable end-user consent to applications](../manage-apps/configure-user-consent.md) | The admin consent workflow gives admins a secure way to grant access to applications that require admin approval so end users don't expose corporate data. Microsoft recommends disabling future user consent operations to help reduce your surface area and mitigate this risk. | | [Enable remote access to on-premises legacy applications with Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md) | Enable Azure AD Application Proxy and integrate with legacy apps for users to securely access on-premises applications by signing in with their Azure AD account. | | [Enable Secure hybrid access: Secure legacy apps with existing app delivery controllers and networks](../manage-apps/secure-hybrid-access.md) (if applicable). | Publish and protect your on-premises and cloud legacy authentication applications by connecting them to Azure AD with your existing application delivery controller or network. |
-| [Integrate supported SaaS applications from the gallery to Azure AD and enable Single sign on](../manage-apps/add-application-portal.md) | Azure AD has a gallery that contains thousands of pre-integrated applications. Some of the applications your organization uses are probably in the gallery accessible directly from the Azure portal. Provide access to corporate SaaS applications remotely and securely with improved user experience (SSO). |
+| [Integrate supported SaaS applications from the gallery to Azure AD and enable Single sign on](../manage-apps/add-application-portal.md) | Azure AD has a gallery that contains thousands of preintegrated applications. Some of the applications your organization uses are probably in the gallery accessible directly from the Azure portal. Provide access to corporate SaaS applications remotely and securely with improved user experience (SSO). |
| [Automate user provisioning and deprovisioning from SaaS Applications](../app-provisioning/user-provisioning.md) (if applicable) | Automatically create user identities and roles in the cloud (SaaS) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change, increasing your organization's security. | | [Enable Conditional Access – Device based](../conditional-access/require-managed-devices.md) | Improve security and user experiences with device-based Conditional Access. This step ensures users can only access from devices that meet your standards for security and compliance. These devices are also known as managed devices. Managed devices can be Intune compliant or Hybrid Azure AD joined devices. | | [Enable Password Protection](../authentication/howto-password-ban-bad-on-premises-deploy.md) | Protect users from using weak and easy to guess passwords. |
The following table is intended to highlight the key actions for the following l
| [Disable end-user consent to applications](../manage-apps/configure-user-consent.md) | The admin consent workflow gives admins a secure way to grant access to applications that require admin approval so end users don't expose corporate data. Microsoft recommends disabling future user consent operations to help reduce your surface area and mitigate this risk. |
| [Enable remote access to on-premises legacy applications with Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md) | Enable Azure AD Application Proxy and integrate with legacy apps for users to securely access on-premises applications by signing in with their Azure AD account. |
| [Enable Secure hybrid access: Secure legacy apps with existing app delivery controllers and networks](../manage-apps/secure-hybrid-access.md) (if applicable). | Publish and protect your on-premises and cloud legacy authentication applications by connecting them to Azure AD with your existing application delivery controller or network. |
-| [Integrate supported SaaS applications from the gallery to Azure AD and enable Single sign on](../manage-apps/add-application-portal.md) | Azure AD has a gallery that contains thousands of pre-integrated applications. Some of the applications your organization uses are probably in the gallery accessible directly from the Azure portal. Provide access to corporate SaaS applications remotely and securely with improved user experience (SSO). |
+| [Integrate supported SaaS applications from the gallery to Azure AD and enable Single sign on](../manage-apps/add-application-portal.md) | Azure AD has a gallery that contains thousands of preintegrated applications. Some of the applications your organization uses are probably in the gallery accessible directly from the Azure portal. Provide access to corporate SaaS applications remotely and securely with improved user experience (SSO). |
| [Automate user provisioning and deprovisioning from SaaS Applications](../app-provisioning/user-provisioning.md) (if applicable) | Automatically create user identities and roles in the cloud (SaaS) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change, increasing your organization's security. |
| [Enable Conditional Access – Device based](../conditional-access/require-managed-devices.md) | Improve security and user experiences with device-based Conditional Access. This step ensures users can only access from devices that meet your standards for security and compliance. These devices are also known as managed devices. Managed devices can be Intune compliant or Hybrid Azure AD joined devices. |
| [Enable Password Protection](../authentication/howto-password-ban-bad-on-premises-deploy.md) | Protect users from using weak and easy to guess passwords. |
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page updates monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md).
+## March 2023
++
+### Public Preview - New provisioning connectors in the Azure AD Application Gallery - March 2023
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+
+We've added the following new applications in our App gallery with Provisioning support. You can now automate creating, updating, and deleting of user accounts for these newly integrated apps:
+
+- [Acunetix 360](../saas-apps/acunetix-360-provisioning-tutorial.md)
+- [Akamai Enterprise Application Access](../saas-apps/akamai-enterprise-application-access-provisioning-tutorial.md)
+- [Ardoq](../saas-apps/ardoq-provisioning-tutorial.md)
+- [Torii](../saas-apps/torii-provisioning-tutorial.md)
++
+For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
++++
+### General Availability - Workload identity Federation for Managed Identities
+
+**Type:** New feature
+**Service category:** Managed identities for Azure resources
+**Product capability:** Developer Experience
+
+Workload Identity Federation enables developers to use managed identities for their software workloads running anywhere and access Azure resources without needing secrets. Key scenarios include:
+- Accessing Azure resources from Kubernetes pods running in any cloud or on-premises
+- GitHub workflows to deploy to Azure, no secrets necessary
+- Accessing Azure resources from other cloud platforms that support OIDC, such as Google Cloud Platform.
+
+For more information, see:
+- [Workload identity federation](../workload-identities/workload-identity-federation.md).
+- [Configure a user-assigned managed identity to trust an external identity provider (preview)](../workload-identities/workload-identity-federation-create-trust-user-assigned-managed-identity.md)
+- [Use Azure AD workload identity (preview) with Azure Kubernetes Service (AKS)](../../aks/workload-identity-overview.md)
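+
+As a rough sketch of the trust setup, the following Azure CLI command creates a federated credential on a user-assigned managed identity. The identity name, resource group, issuer URL, and Kubernetes service account subject are hypothetical placeholders:
+
+```azurecli
+# Sketch only: replace the identity, resource group, issuer, and subject with your own values.
+az identity federated-credential create \
+    --name myFederatedCredential \
+    --identity-name myUserAssignedIdentity \
+    --resource-group myResourceGroup \
+    --issuer "https://oidc.prod-aks.azure.com/<oidc-issuer-guid>/" \
+    --subject "system:serviceaccount:my-namespace:my-service-account" \
+    --audiences "api://AzureADTokenExchange"
+```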
+++
+### Public Preview - New My Groups Experience
+
+**Type:** Changed feature
+**Service category:** Group Management
+**Product capability:** End User Experiences
+
+A new and improved My Groups experience is now available at https://www.myaccount.microsoft.com/groups. My Groups enables end users to easily manage groups, such as finding groups to join, managing groups they own, and managing existing group memberships. Based on customer feedback, the new My Groups experience supports sorting and filtering on lists of groups and group members, a full list of group members in large groups, and an actionable overview page for membership requests.
+This experience replaces the existing My Groups experience at https://www.mygroups.microsoft.com in May.
++
+For more information, see: [Update your Groups info in the My Apps portal](https://support.microsoft.com/account-billing/update-your-groups-info-in-the-my-apps-portal-bc0ca998-6d3a-42ac-acb8-e900fb1174a4).
+++
+### Public preview - Customize tokens with Custom Claims Providers
+
+**Type:** New feature
+**Service category:** Authentications (Logins)
+**Product capability:** Extensibility
+
+A custom claims provider lets you call an API and map custom claims into the token during the authentication flow. The API call is made after the user has completed all their authentication challenges, and a token is about to be issued to the app. For more information, see: [Custom authentication extensions (preview)](../develop/custom-claims-provider-overview.md).
+++
+### General Availability - Converged Authentication Methods
+
+**Type:** New feature
+**Service category:** MFA
+**Product capability:** User Authentication
+
+The Converged Authentication Methods Policy enables you to manage all authentication methods used for MFA and SSPR in one policy, migrate off the legacy MFA and SSPR policies, and target authentication methods to groups of users instead of enabling them for all users in your tenant. For more information, see: [Manage authentication methods](../authentication/concept-authentication-methods-manage.md).
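+
+As a quick illustration, you can read the converged policy from Microsoft Graph. The `az rest` call below is a sketch that assumes you're signed in with sufficient permissions:
+
+```azurecli
+# Read the tenant-wide Authentication Methods Policy from Microsoft Graph.
+az rest --method GET --uri "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy"
+```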
+++
+### General Availability - Provisioning Insights Workbook
+
+**Type:** New feature
+**Service category:** Provisioning
+**Product capability:** Monitoring & Reporting
+
+This new workbook makes it easier to investigate and gain insights into your provisioning workflows in a given tenant. This includes HR-driven provisioning, cloud sync, app provisioning, and cross-tenant sync.
+
+Some key questions this workbook can help answer are:
+
+- How many identities have been synced in a given time range?
+- How many create, delete, update, or other operations were performed?
+- How many operations were successful, skipped, or failed?
+- What specific identities failed? And what step did they fail on?
+- For any given user, what tenants / applications were they provisioned or deprovisioned to?
+
+For more information, see: [Provisioning insights workbook](../app-provisioning/provisioning-workbook.md).
+++
+### General Availability - Number Matching for Microsoft Authenticator notifications
+
+**Type:** Plan for Change
+**Service category:** Microsoft Authenticator App
+**Product capability:** User Authentication
+
+Microsoft Authenticator app's number matching feature has been Generally Available since Nov 2022! If you haven't already used the rollout controls (via Azure portal Admin UX and MSGraph APIs) to smoothly deploy number matching for users of Microsoft Authenticator push notifications, we highly encourage you to do so. We previously announced that we'll remove the admin controls and enforce the number match experience tenant-wide for all users of Microsoft Authenticator push notifications starting February 27, 2023. After listening to customers, we'll extend the availability of the rollout controls for a few more weeks. Organizations can continue to use the existing rollout controls until May 8, 2023, to deploy number matching in their organizations. Microsoft services will start enforcing the number matching experience for all users of Microsoft Authenticator push notifications after May 8, 2023. We'll also remove the rollout controls for number matching after that date.
+
+If customers don't enable number match for all Microsoft Authenticator push notifications prior to May 8, 2023, Authenticator users may experience inconsistent sign-ins while the services are rolling out this change. To ensure consistent behavior for all users, we highly recommend you enable number match for Microsoft Authenticator push notifications in advance.
+
+For more information, see: [How to use number matching in multifactor authentication (MFA) notifications - Authentication methods policy](../authentication/how-to-mfa-number-match.md)
+++
+### Public Preview - IPv6 coming to Azure AD
+
+**Type:** Plan for Change
+**Service category:** Identity Protection
+**Product capability:** Platform
+
+Earlier, we announced our plan to bring IPv6 support to Microsoft Azure Active Directory (Azure AD), enabling our customers to reach the Azure AD services over IPv4, IPv6 or dual stack endpoints. This is just a reminder that we have started introducing IPv6 support into Azure AD services in a phased approach in late March 2023.
+
+If you utilize Conditional Access or Identity Protection, and have IPv6 enabled on any of your devices, you likely must take action to avoid impacting your users. For most customers, IPv4 won't completely disappear from their digital landscape, so we aren't planning to require IPv6 or to deprioritize IPv4 in any Azure AD features or services. We'll continue to share additional guidance on IPv6 enablement in Azure AD at this link: [IPv6 support in Azure Active Directory](https://learn.microsoft.com/troubleshoot/azure/active-directory/azure-ad-ipv6-support)
+++
+### General Availability - Microsoft cloud settings for Azure AD B2B
+
+**Type:** New feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+Microsoft cloud settings let you collaborate with organizations from different Microsoft Azure clouds. With Microsoft cloud settings, you can establish mutual B2B collaboration between the following clouds:
+
+- Microsoft Azure commercial and Microsoft Azure Government
+- Microsoft Azure commercial and Microsoft Azure China 21Vianet
+
+For more information about Microsoft cloud settings for B2B collaboration, see: [Microsoft cloud settings](../external-identities/cross-tenant-access-overview.md#microsoft-cloud-settings).
+++
+### Modernizing Terms of Use Experiences
+
+**Type:** Plan for Change
+**Service category:** Access Reviews
+**Product capability:** AuthZ/Access Delegation
+
+Starting July 2023, we're modernizing the following Terms of Use end user experiences with an updated PDF viewer, and moving the experiences from https://account.activedirectory.windowsazure.com to https://myaccount.microsoft.com:
+- View previously accepted terms of use.
+- Accept or decline terms of use as part of the sign-in flow.
+
+No functionalities will be removed. The new PDF viewer adds functionality and the limited visual changes in the end-user experiences will be communicated in a future update. If your organization has allow-listed only certain domains, you must ensure your allowlist includes the domains 'myaccount.microsoft.com' and '*.myaccount.microsoft.com' for Terms of Use to continue working as expected.
+++ ## February 2023 ### General Availability - Expanding Privileged Identity Management Role Activation across the Azure portal
Privileged Identity Management (PIM) role activation has been expanded to the Bi
For more information Microsoft cloud settings, see: [Activate my Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-activate-your-roles.md). - ### General Availability - Follow Azure AD best practices with recommendations
For listing your application in the Azure AD app gallery, read the details here
**Service category:** Other **Product capability:** Developer Experience
-As part of our ongoing initiative to improve the developer experience, service reliability, and security of customer applications, we'll end support for the Microsoft Authentication Library (ADAL). The final deadline to migrate your applications to Microsoft Authentication Library (MSAL) has been extended to **June 30, 2023**.
+As part of our ongoing initiative to improve the developer experience, service reliability, and security of customer applications, we'll end support for the Azure Active Directory Authentication Library (ADAL). The final deadline to migrate your applications to the Microsoft Authentication Library (MSAL) has been extended to **June 30, 2023**.
### Why are we doing this?
-As we consolidate and evolve the Microsoft Identity platform, we're also investing in making significant improvements to the developer experience and service features that make it possible to build secure, robust and resilient applications. To make these features available to our customers, we needed to update the architecture of our software development kits. As a result of this change, we've decided that the path forward requires us to sunset Azure Active Directory Authentication Library. This allows us to focus on developer experience investments with Microsoft Authentication Library.
+As we consolidate and evolve the Microsoft Identity platform, we're also investing in making significant improvements to the developer experience and service features that make it possible to build secure, robust and resilient applications. To make these features available to our customers, we needed to update the architecture of our software development kits. As a result of this change, we've decided that the path forward requires us to sunset the Azure Active Directory Authentication Library. This allows us to focus on developer experience investments with the Microsoft Authentication Library (MSAL).
### What happens?
active-directory Manage Self Service Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-self-service-access.md
To enable self-service application access to an application, follow the steps be
1. In the left navigation menu, select **Self-service**. > [!NOTE]
- > The **Self-service** menu item isn't available if your app registration's setting for public client flows is enabled. To access this menu item, select **Authentication** in the left navigation, then set the **Allow public client flows** to **No**.
+ > The **Self-service** menu item isn't available if the corresponding app registration's setting for public client flows is enabled. To access this setting, the app registration needs to exist in your tenant. Locate the app registration, select **Authentication** in the left navigation, then locate **Allow public client flows**.
1. To enable Self-service application access for this application, set **Allow users to request access to this application?** to **Yes.** 1. Next to **To which group should assigned users be added?**, select **Select group**. Choose a group, and then select **Select**. When a user's request is approved, they'll be added to this group. When viewing this group's membership, you'll be able to see who has been granted access to the application through self-service access.
active-directory Delegate App Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/delegate-app-roles.md
Previously updated : 11/04/2020 Last updated : 03/30/2023
Tips when creating and using custom roles for delegating application management:
For more information on the basics of custom roles, see the [custom roles overview](custom-overview.md), as well as how to [create a custom role](custom-create.md) and how to [assign a role](custom-assign-powershell.md).
+## Troubleshoot
+
+### Symptom - Access denied when you try to register an application
+
+When you try to register an application in Azure AD, you get a message similar to the following:
+
+```
+Access denied
+You do not have access
+You don't have permission to register applications in the <directoryName> directory. To request access, contact your administrator.
+```
++
+**Cause**
+
+You can't register the application in the directory because your directory administrator has [restricted who can create applications](#restrict-who-can-create-applications).
+
+**Solution**
+
+Contact your administrator to do one of the following:
+
+- Grant you permissions to create and consent to applications by [assigning you the Application Developer role](#grant-individual-permissions-to-create-and-consent-to-applications-when-the-default-ability-is-disabled).
+- Create the application registration for you and [assign you as the application owner](#assign-application-owners).
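+
+As an illustrative sketch only, an administrator could grant the Application Developer role through Microsoft Graph by using `az rest`; the placeholders below are hypothetical and must be replaced with real object IDs:
+
+```azurecli
+# Sketch: look up the Application Developer role definition, then assign it to a user.
+az rest --method GET \
+    --uri "https://graph.microsoft.com/v1.0/roleManagement/directory/roleDefinitions?\$filter=displayName eq 'Application Developer'"
+
+az rest --method POST \
+    --uri "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments" \
+    --body '{"@odata.type": "#microsoft.graph.unifiedRoleAssignment", "roleDefinitionId": "<roleDefinitionId-from-previous-call>", "principalId": "<user-object-id>", "directoryScopeId": "/"}'
+```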
+ ## Next steps - [Application registration subtypes and permissions](custom-available-permissions.md)
active-directory Github Enterprise Cloud Enterprise Account Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-enterprise-cloud-enterprise-account-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with GitHub Enterprise Cloud - Enterprise Account'
+ Title: 'Tutorial: Azure Active Directory SSO integration with GitHub Enterprise Cloud - Enterprise Account'
description: Learn how to configure single sign-on between Azure Active Directory and GitHub Enterprise Cloud - Enterprise Account.
Previously updated : 11/21/2022 Last updated : 03/29/2023
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with GitHub Enterprise Cloud - Enterprise Account
+# Tutorial: Azure Active Directory SSO integration with GitHub Enterprise Cloud - Enterprise Account
-In this tutorial, you'll learn how to integrate GitHub Enterprise Cloud - Enterprise Account with Azure Active Directory (Azure AD). When you integrate GitHub Enterprise Cloud - Enterprise Account with Azure AD, you can:
+In this tutorial, you learn how to set up an Azure Active Directory (Azure AD) SAML integration with a GitHub Enterprise Cloud - Enterprise Account. When you integrate GitHub Enterprise Cloud - Enterprise Account with Azure AD, you can:
* Control in Azure AD who has access to a GitHub Enterprise Account and any organizations within the Enterprise Account. - ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
## Scenario description
-In this tutorial, you configure and test Azure AD SSO in a test environment.
+In this tutorial, you will configure a SAML integration for a GitHub Enterprise Account, and test enterprise account owner and enterprise/organization member authentication and access.
+
+> [!NOTE]
+> The GitHub `Enterprise Cloud - Enterprise Account` application does not support enabling [automatic SCIM provisioning](../fundamentals/sync-scim.md). If you need to set up provisioning for your GitHub Enterprise Cloud environment, SAML must be configured at the organization level and the `GitHub Enterprise Cloud - Organization` Azure AD application must be used instead. If you are setting up a SAML and SCIM provisioning integration for an enterprise that is enabled for [Enterprise Managed Users (EMUs)](https://docs.github.com/enterprise-cloud@latest/admin/identity-and-access-management/using-enterprise-managed-users-for-iam/about-enterprise-managed-users), then you must use the `GitHub Enterprise Managed User` Azure AD application for SAML/Provisioning integrations or the `GitHub Enterprise Managed User (OIDC)` Azure AD application for OIDC/Provisioning integrations.
* GitHub Enterprise Cloud - Enterprise Account supports **SP** and **IDP** initiated SSO.
-* GitHub Enterprise Cloud - Enterprise Account supports **Just In Time** user provisioning.
## Adding GitHub Enterprise Cloud - Enterprise Account from the gallery
To configure the integration of GitHub Enterprise Cloud - Enterprise Account int
1. In the **Add from the gallery** section, type **GitHub Enterprise Cloud - Enterprise Account** in the search box. 1. Select **GitHub Enterprise Cloud - Enterprise Account** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+ Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
## Configure and test Azure AD SSO for GitHub Enterprise Cloud - Enterprise Account
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern: `https://github.com/enterprises/<ENTERPRISE-SLUG>`
Follow these steps to enable Azure AD SSO in the Azure portal.
b. In the **Reply URL** text box, type a URL using the following pattern: `https://github.com/enterprises/<ENTERPRISE-SLUG>/saml/consume`
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+1. Perform the following step if you wish to configure the application in **SP** initiated mode:
In the **Sign on URL** text box, type a URL using the following pattern: `https://github.com/enterprises/<ENTERPRISE-SLUG>/sso`
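For example, with a hypothetical enterprise slug of `contoso`, the three values would look like this:

```config
Identifier (Entity ID): https://github.com/enterprises/contoso
Reply URL:              https://github.com/enterprises/contoso/saml/consume
Sign on URL:            https://github.com/enterprises/contoso/sso
```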
active-directory Revspace Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/revspace-tutorial.md
+
+ Title: Azure Active Directory SSO integration with RevSpace
+description: Learn how to configure single sign-on between Azure Active Directory and RevSpace.
+ Last updated : 03/28/2023
+# Tutorial: Azure Active Directory SSO integration with RevSpace
+
+In this tutorial, you learn how to integrate RevSpace with Azure Active Directory (Azure AD). When you integrate RevSpace with Azure AD, you can:
+
+* Control in Azure AD who has access to RevSpace.
+* Enable your users to be automatically signed-in to RevSpace with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* RevSpace single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* RevSpace supports **SP and IDP** initiated SSO.
+* RevSpace supports **Just In Time** user provisioning.
+
+## Adding RevSpace from the gallery
+
+To configure the integration of RevSpace into Azure AD, you need to add RevSpace from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **RevSpace** in the search box.
+1. Select **RevSpace** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+ Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Azure AD SSO for RevSpace
+
+Configure and test Azure AD SSO with RevSpace using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in RevSpace.
+
+To configure and test Azure AD SSO with RevSpace, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure RevSpace SSO](#configure-revspace-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create RevSpace test user](#create-revspace-test-user)** - to have a counterpart of B.Simon in RevSpace that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **RevSpace** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_SUBDOMAIN>.revspace.io/login/callback`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_SUBDOMAIN>.revspace.io/login/callback`
+
+1. Perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_SUBDOMAIN>.revspace.io/login/callback`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [RevSpace Client support team](mailto:support@revspace.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
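+
+   As a hypothetical example, if your customer subdomain were `contoso`, the Identifier, Reply URL, and Sign on URL would all be:
+
+   ```config
+   https://contoso.revspace.io/login/callback
+   ```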
+
+1. The RevSpace application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the RevSpace application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them according to your requirements.
+
+ | Name | Source Attribute |
+ | ---- | ---------------- |
+ | Firstname | user.givenname |
+ | Lastname | user.surname |
+ | jobtitle | user.jobtitle |
+ | department | user.department |
+ | employeeid | user.employeeid |
+ | postalcode | user.postalcode |
+ | country | user.country |
+ | role | user.assignedroles |
+
+ > [!NOTE]
+ > RevSpace expects roles for users assigned to the application. Please set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui).
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up RevSpace** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you enable B.Simon to use Azure single sign-on by granting access to RevSpace.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **RevSpace**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select one from the **Select a role** dropdown.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure RevSpace SSO
+
+1. In a different web browser window, sign in to RevSpace as an administrator.
+
+1. Click on user Profile icon, then select **Company settings**.
+
+ ![Screenshot of company settings in RevSpace.](./media/teamzskill-tutorial/settings.png)
+
+1. Perform the following steps in the **Settings** page.
+
+ ![Screenshot of settings in RevSpace.](./media/teamzskill-tutorial/metadata.png)
+
+ a. Navigate to **Company > Single Sign-On**, then select the **Metadata Upload** tab.
+
+   b. Paste the **Federation Metadata XML** value that you copied from the Azure portal into the **XML Metadata** field.
+
+ c. Then click **Save**.
+
+### Create RevSpace test user
+
+In this section, a user called B.Simon is created in RevSpace. RevSpace supports just-in-time provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in RevSpace, a new one is created when you attempt to access RevSpace.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This redirects to the RevSpace sign-on URL, where you can initiate the login flow.
+
+* Go to the RevSpace sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the RevSpace for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the RevSpace tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the RevSpace for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure RevSpace you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Signiant Media Shuttle Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/signiant-media-shuttle-tutorial.md
Previously updated : 03/13/2023 Last updated : 03/29/2023 # Azure Active Directory SSO integration with Signiant Media Shuttle
-In this article, you learn how to integrate Signiant Media Shuttle with Azure Active Directory (Azure AD). Media Shuttle is a solution for securely moving large files and data sets to, and from, cloud-based or on-premises storage. Transfers are accelerated and can be up to 100 s of times faster than FTP. When you integrate Signiant Media Shuttle with Azure AD, you can:
+In this article, you learn how to integrate Signiant Media Shuttle with Azure Active Directory (Azure AD). Media Shuttle is a solution for securely moving large files and data sets to, and from, cloud-based or on-premises storage. Transfers are accelerated and can be up to hundreds of times faster than FTP.
+
+When you integrate Signiant Media Shuttle with Azure AD, you can:
* Control in Azure AD who has access to Signiant Media Shuttle. * Enable your users to be automatically signed-in to Signiant Media Shuttle with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-You need to configure and test Azure AD single sign-on for Signiant Media Shuttle in a test environment. Signiant Media Shuttle supports only **SP** initiated single sign-on and **Just In Time** user provisioning.
+You must configure and test Azure AD single sign-on for Signiant Media Shuttle in a test environment. Signiant Media Shuttle supports **SP** initiated single sign-on and **Just In Time** user provisioning.
## Prerequisites
To integrate Azure Active Directory with Signiant Media Shuttle, you need:
* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Signiant Media Shuttle single sign-on (SSO) enabled subscription.
+* A Signiant Media Shuttle subscription with a SAML Web SSO license, and access to the IT and Operations Administration Consoles.
## Add application and assign a test user
Before you begin the process of configuring single sign-on, you need to add the
### Add Signiant Media Shuttle from the Azure AD gallery
-Add Signiant Media Shuttle from the Azure AD application gallery to configure single sign-on with Signiant Media Shuttle. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+Add Signiant Media Shuttle from the Azure AD application gallery to configure single sign-on for Signiant Media Shuttle. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
### Create and assign Azure AD test user Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+You can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
## Configure Azure AD SSO
Complete the following steps to enable Azure AD single sign-on in the Azure port
a. In the **Identifier** textbox, type a value or URL using one of the following patterns:
- | **Identifier** |
- ||
- | `https://<PORTALNAME>.mediashuttle.com` |
- | `mediashuttle` |
+ | **Configuration Type** | **Identifier** |
+ | -- | -- |
+ | Account Level | `mediashuttle` |
+ | Portal Level | `https://<PORTALNAME>.mediashuttle.com` |
b. In the **Reply URL** textbox, type a URL using one of the following patterns:
- | **Reply URL**|
- ||
- | `https://portals.mediashuttle.com/auth` |
- | `https://<PORTALNAME>.mediashuttle.com/auth` |
+ | **Configuration Type** | **Reply URL** |
+ | -- | -- |
+ | Account Level | `https://portals.mediashuttle.com/auth` |
+ | Portal Level | `https://<PORTALNAME>.mediashuttle.com/auth`|
- c. In the **Sign on URL** textbox, type a URL using one of the following patterns:
+ c. In the **Sign on URL** textbox, type a URL using one of the following patterns:
- | **Sign on URL**|
- ||
- | `https://portals.mediashuttle.com/auth` |
- | `https://<PORTALNAME>.mediashuttle.com/auth` |
+ | **Configuration Type** | **Sign on URL** |
+ | - | -- |
+ | Account Level | `https://portals.mediashuttle.com/auth` |
+ | Portal Level | `https://<PORTALNAME>.mediashuttle.com/auth` |
> [!Note] > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Signiant Media Shuttle support team](mailto:support@signiant.com) to get these values. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure portal.
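For illustration only, a portal-level configuration for a hypothetical portal named `mystudio` would use values like the following:

```config
Identifier:  https://mystudio.mediashuttle.com
Reply URL:   https://mystudio.mediashuttle.com/auth
Sign on URL: https://mystudio.mediashuttle.com/auth
```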
Complete the following steps to enable Azure AD single sign-on in the Azure port
## Configure Signiant Media Shuttle SSO
-To configure single sign-on on **Signiant Media Shuttle** side, you need to send the **App Federation Metadata Url** to [Signiant Media Shuttle support team](mailto:support@signiant.com). They set this setting to have the SAML SSO connection set properly on both sides.
+Once you have the **App Federation Metadata Url**, sign in to the Media Shuttle IT Administration Console.
+
+To add Azure AD Metadata in Media Shuttle:
+
+1. Log into your IT Administration Console.
+
+2. On the **Security** page, in the **Identity Provider Metadata** field, paste the **App Federation Metadata Url** that you copied from the Azure portal.
+
+3. Click **Save**.
+
+Once you have set up Azure AD for Media Shuttle, assigned users and groups can sign in to Media Shuttle portals through single sign-on using Azure AD authentication.
### Create Signiant Media Shuttle test user In this section, a user called Britta Simon is created in Signiant Media Shuttle. Signiant Media Shuttle supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Signiant Media Shuttle, a new one is created after authentication.
+If **Auto-add SAML authenticated members to this portal** is not enabled as part of the SAML configuration, you must add users through the **Portal Administration** console at `https://<PORTALNAME>.mediashuttle.com/admin`.
+ ## Test SSO In this section, you test your Azure AD single sign-on configuration with following options.
active-directory Presentation Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/presentation-request-api.md
The callback endpoint is called when a user scans the QR code, uses the deep lin
| `requestStatus` |string |The status returned when the request was retrieved by the authenticator app. Possible values: <ul><li>`request_retrieved`: The user scanned the QR code or selected the link that starts the presentation flow.</li><li>`presentation_verified`: The verifiable credential validation completed successfully.</li></ul> |
| `state` |string| Returns the state value that you passed in the original payload. |
| `subject`|string | The verifiable credential user DID.|
-| `issuers`| array |Returns an array of verifiable credentials requested. For each verifiable credential, it provides: </li><li>The verifiable credential type(s).</li><li>The issuer's DID</li><li>The claims retrieved.</li><li>The verifiable credential issuer's domain. </li><li>The verifiable credential issuer's domain validation status. </li></ul> |
+| `verifiedCredentialsData`| array |Returns an array of verifiable credentials requested. For each verifiable credential, it provides: <ul><li>The verifiable credential type(s).</li><li>The issuer's DID</li><li>The claims retrieved.</li><li>The verifiable credential issuer's domain. </li><li>The verifiable credential issuer's domain validation status. </li></ul> |
| `receipt`| string | Optional. The receipt contains the original payload sent from the wallet to the Verifiable Credentials service. The receipt should be used for troubleshooting/debugging only. The format in the receipt isn't fixed and can change based on the wallet and version used.|

The following example demonstrates a callback payload when the authenticator app starts the presentation request:
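A minimal sketch of such a payload, with hypothetical identifier values, might look like the following:

```json
{
    "requestId": "e4ef27ca-eb8c-4b63-823b-3b95140eac11",
    "requestStatus": "request_retrieved",
    "state": "92d076dd-450a-4247-aa5b-d2e75a1a5d58"
}
```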
advisor Advisor Azure Resource Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-azure-resource-graph.md
Advisor resources are now onboarded to [Azure Resource Graph](https://azure.microsoft.com/features/resource-graph/). This lays the foundation for many at-scale customer scenarios for Advisor recommendations. A few scenarios that weren't possible to do at scale before, and that can now be achieved using Resource Graph, are:
* The capability to perform complex queries for all your subscriptions in the Azure portal
-* Recommendations summarized by category types ( like reliability, performance) and impact types (high, medium, low)
+* Recommendations summarized by category types (like reliability, performance) and impact types (high, medium, low)
* All recommendations for a particular recommendation type
* Impacted resource count by recommendation category
aks Azure Csi Blob Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-provision.md
This section provides guidance for cluster administrators who want to create one
| | **Following parameters are only for blobfuse** | | | |
|volumeAttributes.secretName | Secret name that stores storage account name and key (only applies for SMB).| | No ||
|volumeAttributes.secretNamespace | Specify namespace of secret to store account key. | `default` | No | Pvc namespace|
-|nodeStageSecretRef.name | Specify secret name that stores one of the following:<br> `azurestorageaccountkey`<br>`azurestorageaccountsastoken`<br>`msisecret`<br>`azurestoragespnclientsecret`. | |Existing Kubernetes secret name | No |
+|nodeStageSecretRef.name | Specify secret name that stores one of the following:<br> `azurestorageaccountkey`<br>`azurestorageaccountsastoken`<br>`msisecret`<br>`azurestoragespnclientsecret`. | | No |Existing Kubernetes secret name |
|nodeStageSecretRef.namespace | Specify the namespace of secret. | Kubernetes namespace | Yes ||
| | **Following parameters are only for NFS protocol** | | | |
|volumeAttributes.mountPermissions | Specify mounted folder permissions. | `0777` | No ||
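As a sketch of how `nodeStageSecretRef` fits into a statically provisioned volume, the following PersistentVolume fragment references a hypothetical Kubernetes secret named `azure-secret`; all names and sizes are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-blob
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: blob.csi.azure.com
    volumeHandle: unique-volume-handle       # must be unique across the cluster
    volumeAttributes:
      containerName: mycontainer
    nodeStageSecretRef:
      name: azure-secret                     # for example, stores azurestorageaccountname and azurestorageaccountkey
      namespace: default
```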
aks Azure Disk Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-customer-managed-keys.md
az keyvault set-policy -n myKeyVaultName -g myResourceGroup --object-id $desIden
Create a **new resource group** and AKS cluster, then use your key to encrypt the OS disk. > [!IMPORTANT]
-> Ensure you create a new resoruce group for your AKS cluster
+> Ensure you create a new resource group for your AKS cluster
```azurecli-interactive # Retrieve the DiskEncryptionSet value and set a variable
someuser@Azure:~$ az account list
] ```
-Create a file called **byok-azure-disk.yaml** that contains the following information. Replace myAzureSubscriptionId, myResourceGroup, and myDiskEncrptionSetName with your values, and apply the yaml. Make sure to use the resource group where your DiskEncryptionSet is deployed. If you use the Azure Cloud Shell, this file can be created using vi or nano as if working on a virtual or physical system:
+Create a file called **byok-azure-disk.yaml** that contains the following information. Replace *myAzureSubscriptionId*, *myResourceGroup*, and *myDiskEncryptionSetName* with your values, and apply the yaml. Make sure to use the resource group where your DiskEncryptionSet is deployed.
-```
+```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
az aks create \
--docker-bridge-address 172.17.0.1/16 \ --vnet-subnet-id $SUBNET_ID ```
-* The *--service-cidr* is optional. This address is used to assign internal services in the AKS cluster an IP address. This IP address range should be an address space that isn't in use elsewhere in your network environment, including any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection.
+* The *--service-cidr* is optional. This address is used to assign internal services in the AKS cluster an IP address. This IP address range should be an address space that isn't in use elsewhere in your network environment, including any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection. The default value is 10.0.0.0/16.
-* The *--dns-service-ip* is optional. The address should be the *.10* address of your service IP address range.
+* The *--dns-service-ip* is optional. The address should be the *.10* address of your service IP address range. The default value is 10.0.0.10.
-* The *--pod-cidr* is optional. This address should be a large address space that isn't in use elsewhere in your network environment. This range includes any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection.
+* The *--pod-cidr* is optional. This address should be a large address space that isn't in use elsewhere in your network environment, including any on-premises network ranges if you connect, or plan to connect, your Azure virtual networks using Express Route or a Site-to-Site VPN connection. The default value is 10.244.0.0/16.
* This address range must be large enough to accommodate the number of nodes that you expect to scale up to. You can't change this address range once the cluster is deployed if you need more addresses for additional nodes. * The pod IP address range is used to assign a */24* address space to each node in the cluster. In the following example, the *--pod-cidr* of *10.244.0.0/16* assigns the first node *10.244.0.0/24*, the second node *10.244.1.0/24*, and the third node *10.244.2.0/24*. * As the cluster scales or upgrades, the Azure platform continues to assign a pod IP address range to each new node.
-* The *--docker-bridge-address* is optional. The address lets the AKS nodes communicate with the underlying management platform. This IP address must not be within the virtual network IP address range of your cluster, and shouldn't overlap with other address ranges in use on your network.
+* The *--docker-bridge-address* is optional. The address lets the AKS nodes communicate with the underlying management platform. This IP address must not be within the virtual network IP address range of your cluster, and shouldn't overlap with other address ranges in use on your network. The default value is 172.17.0.1/16.
> [!Note] > If you wish to enable an AKS cluster to include a [Calico network policy][calico-network-policies] you can use the following command.
az aks create \
--node-count 3 \ --network-plugin kubenet \ --vnet-subnet-id $SUBNET_ID \
- --enable-managed-identity \
--assign-identity <identity-resource-id> ```
aks Custom Node Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-node-configuration.md
Customizing your node configuration allows you to configure or tune your operating system (OS) settings or the kubelet parameters to match the needs of the workloads. When you create an AKS cluster or add a node pool to your cluster, you can customize a subset of commonly used OS and kubelet settings. To configure settings beyond this subset, [use a daemon set to customize your needed configurations without losing AKS support for your nodes](support-policies.md#shared-responsibility).
-## Use custom node configuration
+## Create an AKS cluster with a customized node configuration
-### Kubelet custom configuration
-The supported Kubelet parameters and accepted values are listed below.
+### Prerequisites for Windows kubelet custom configuration (Preview)
-| Parameter | Allowed values/interval | Default | Description |
-| | -- | - | -- |
-| `cpuManagerPolicy` | none, static | none | The static policy allows containers in [Guaranteed pods](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/) with integer CPU requests access to exclusive CPUs on the node. |
-| `cpuCfsQuota` | true, false | true | Enable/Disable CPU CFS quota enforcement for containers that specify CPU limits. |
-| `cpuCfsQuotaPeriod` | Interval in milliseconds (ms) | `100ms` | Sets CPU CFS quota period value. |
-| `imageGcHighThreshold` | 0-100 | 85 | The percent of disk usage after which image garbage collection is always run. Minimum disk usage that **will** trigger garbage collection. To disable image garbage collection, set to 100. |
-| `imageGcLowThreshold` | 0-100, no higher than `imageGcHighThreshold` | 80 | The percent of disk usage before which image garbage collection is never run. Minimum disk usage that **can** trigger garbage collection. |
-| `topologyManagerPolicy` | none, best-effort, restricted, single-numa-node | none | Optimize NUMA node alignment, see more [here](https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/). |
-| `allowedUnsafeSysctls` | `kernel.shm*`, `kernel.msg*`, `kernel.sem`, `fs.mqueue.*`, `net.*` | None | Allowed list of unsafe sysctls or unsafe sysctl patterns. |
-| `containerLogMaxSizeMB` | Size in megabytes (MB) | 10 MB | The maximum size (for example, 10 MB) of a container log file before it's rotated. |
-| `containerLogMaxFiles` | ≥ 2 | 5 | The maximum number of container log files that can be present for a container. |
-| `podMaxPids` | -1 to kernel PID limit | -1 (∞)| The maximum amount of process IDs that can be running in a Pod |
-
-### Linux OS custom configuration
-
-The supported OS settings and accepted values are listed below.
-
-#### File handle limits
-
-When you're serving a lot of traffic, it's common that the traffic you're serving is coming from a large number of local files. You can tweak the below kernel settings and built-in limits to allow you to handle more, at the cost of some system memory.
-
-| Setting | Allowed values/interval | Default | Description |
-| - | -- | - | -- |
-| `fs.file-max` | 8192 - 12000500 | 709620 | Maximum number of file-handles that the Linux kernel will allocate, by increasing this value you can increase the maximum number of open files permitted. |
-| `fs.inotify.max_user_watches` | 781250 - 2097152 | 1048576 | Maximum number of file watches allowed by the system. Each *watch* is roughly 90 bytes on a 32-bit kernel, and roughly 160 bytes on a 64-bit kernel. |
-| `fs.aio-max-nr` | 65536 - 6553500 | 65536 | The aio-nr shows the current system-wide number of asynchronous io requests. aio-max-nr allows you to change the maximum value aio-nr can grow to. |
-| `fs.nr_open` | 8192 - 20000500 | 1048576 | The maximum number of file-handles a process can allocate. |
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+First, install the aks-preview extension by running the following command:
-#### Socket and network tuning
-
-For agent nodes, which are expected to handle very large numbers of concurrent sessions, you can use the subset of TCP and network options below that you can tweak per node pool.
-
-| Setting | Allowed values/interval | Default | Description |
-| - | -- | - | -- |
-| `net.core.somaxconn` | 4096 - 3240000 | 16384 | Maximum number of connection requests that can be queued for any given listening socket. An upper limit for the value of the backlog parameter passed to the [listen(2)](http://man7.org/linux/man-pages/man2/listen.2.html) function. If the backlog argument is greater than the `somaxconn`, then it's silently truncated to this limit.
-| `net.core.netdev_max_backlog` | 1000 - 3240000 | 1000 | Maximum number of packets, queued on the INPUT side, when the interface receives packets faster than kernel can process them. |
-| `net.core.rmem_max` | 212992 - 134217728 | 212992 | The maximum receive socket buffer size in bytes. |
-| `net.core.wmem_max` | 212992 - 134217728 | 212992 | The maximum send socket buffer size in bytes. |
-| `net.core.optmem_max` | 20480 - 4194304 | 20480 | Maximum ancillary buffer size (option memory buffer) allowed per socket. Socket option memory is used in a few cases to store extra structures relating to usage of the socket. |
-| `net.ipv4.tcp_max_syn_backlog` | 128 - 3240000 | 16384 | The maximum number of queued connection requests that have still not received an acknowledgment from the connecting client. If this number is exceeded, the kernel will begin dropping requests. |
-| `net.ipv4.tcp_max_tw_buckets` | 8000 - 1440000 | 32768 | Maximal number of `timewait` sockets held by system simultaneously. If this number is exceeded, time-wait socket is immediately destroyed and warning is printed. |
-| `net.ipv4.tcp_fin_timeout` | 5 - 120 | 60 | The length of time an orphaned (no longer referenced by any application) connection will remain in the FIN_WAIT_2 state before it's aborted at the local end. |
-| `net.ipv4.tcp_keepalive_time` | 30 - 432000 | 7200 | How often TCP sends out `keepalive` messages when `keepalive` is enabled. |
-| `net.ipv4.tcp_keepalive_probes` | 1 - 15 | 9 | How many `keepalive` probes TCP sends out, until it decides that the connection is broken. |
-| `net.ipv4.tcp_keepalive_intvl` | 1 - 75 | 75 | How frequently the probes are sent out. Multiplied by `tcp_keepalive_probes` it makes up the time to kill a connection that isn't responding, after probes started. |
-| `net.ipv4.tcp_tw_reuse` | 0 or 1 | 0 | Allow to reuse `TIME-WAIT` sockets for new connections when it's safe from protocol viewpoint. |
-| `net.ipv4.ip_local_port_range` | First: 1024 - 60999 and Last: 32768 - 65000] | First: 32768 and Last: 60999 | The local port range that is used by TCP and UDP traffic to choose the local port. Comprised of two numbers: The first number is the first local port allowed for TCP and UDP traffic on the agent node, the second is the last local port number. |
-| `net.ipv4.neigh.default.gc_thresh1`| 128 - 80000 | 4096 | Minimum number of entries that may be in the ARP cache. Garbage collection won't be triggered if the number of entries is below this setting. |
-| `net.ipv4.neigh.default.gc_thresh2`| 512 - 90000 | 8192 | Soft maximum number of entries that may be in the ARP cache. This setting is arguably the most important, as ARP garbage collection will be triggered about 5 seconds after reaching this soft maximum. |
-| `net.ipv4.neigh.default.gc_thresh3`| 1024 - 100000 | 16384 | Hard maximum number of entries in the ARP cache. |
-| `net.netfilter.nf_conntrack_max` | 131072 - 1048576 | 131072 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_max` is the maximum number of nodes in the hash table, that is, the maximum number of connections supported by the `nf_conntrack` module or the size of connection tracking table. |
-| `net.netfilter.nf_conntrack_buckets` | 65536 - 147456 | 65536 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_buckets` is the size of hash table. |
+```azurecli
+az extension add --name aks-preview
+```
-#### Worker limits
+Run the following command to update to the latest version of the extension:
-Like file descriptor limits, the number of workers or threads that a process can create are limited by both a kernel setting and user limits. The user limit on AKS is unlimited.
+```azurecli
+az extension update --name aks-preview
+```
-| Setting | Allowed values/interval | Default | Description |
-| - | -- | - | -- |
-| `kernel.threads-max` | 20 - 513785 | 55601 | Processes can spin up worker threads. The maximum number of all threads that can be created is set with the kernel setting `kernel.threads-max`. |
+Then register the `WindowsCustomKubeletConfigPreview` feature flag by using the [`az feature register`][az-feature-register] command, as shown in the following example:
-#### Virtual memory
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "WindowsCustomKubeletConfigPreview"
+```
-The settings below can be used to tune the operation of the virtual memory (VM) subsystem of the Linux kernel and the `writeout` of dirty data to disk.
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [`az feature show`][az-feature-show] command:
-| Setting | Allowed values/interval | Default | Description |
-| - | -- | - | -- |
-| `vm.max_map_count` | 65530 - 262144 | 65530 | This file contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling `malloc`, directly by `mmap`, `mprotect`, and `madvise`, and also when loading shared libraries. |
-| `vm.vfs_cache_pressure` | 1 - 500 | 100 | This percentage value controls the tendency of the kernel to reclaim the memory, which is used for caching of directory and inode objects. |
-| `vm.swappiness` | 0 - 100 | 60 | This control is used to define how aggressive the kernel will swap memory pages. Higher values will increase aggressiveness, lower values decrease the amount of swap. A value of 0 instructs the kernel not to initiate swap until the amount of free and file-backed pages is less than the high water mark in a zone. |
-| `swapFileSizeMB` | 1 MB - Size of the [temporary disk](../virtual-machines/managed-disks-overview.md#temporary-disk) (/dev/sdb) | None | SwapFileSizeMB specifies size in MB of a swap file will be created on the agent nodes from this node pool. |
-| `transparentHugePageEnabled` | `always`, `madvise`, `never` | `always` | [Transparent Hugepages](https://www.kernel.org/doc/html/latest/admin-guide/mm/transhuge.html#admin-guide-transhuge) is a Linux kernel feature intended to improve performance by making more efficient use of your processor's memory-mapping hardware. When enabled the kernel attempts to allocate `hugepages` whenever possible and any Linux process will receive 2-MB pages if the `mmap` region is 2 MB naturally aligned. In certain cases when `hugepages` are enabled system wide, applications may end up allocating more memory resources. An application may `mmap` a large region but only touch 1 byte of it, in that case a 2-MB page might be allocated instead of a 4k page for no good reason. This scenario is why it's possible to disable `hugepages` system-wide or to only have them inside `MADV_HUGEPAGE madvise` regions. |
-| `transparentHugePageDefrag` | `always`, `defer`, `defer+madvise`, `madvise`, `never` | `madvise` | This value controls whether the kernel should make aggressive use of memory compaction to make more `hugepages` available. |
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "WindowsCustomKubeletConfigPreview"
+```
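+
+While registration is in progress, the `state` property reads *Registering*. As a rough sketch, the output resembles the following (the field values are illustrative, and the exact shape can vary by CLI version):
+
+```json
+{
+  "id": "/subscriptions/<subscription-id>/providers/Microsoft.Features/providers/Microsoft.ContainerService/features/WindowsCustomKubeletConfigPreview",
+  "name": "Microsoft.ContainerService/WindowsCustomKubeletConfigPreview",
+  "properties": {
+    "state": "Registered"
+  },
+  "type": "Microsoft.Features/features"
+}
+```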
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [`az provider register`][az-provider-register] command:
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
-> [!IMPORTANT]
-> For ease of search and readability the OS settings are displayed in this document by their name but should be added to the configuration json file or AKS API using [camelCase capitalization convention](/dotnet/standard/design-guidelines/capitalization-conventions).
+### Create config files for kubelet configuration, OS configuration, or both
-Create a `kubeletconfig.json` file with the following contents:
+Create a `linuxkubeletconfig.json` file with the following contents (for Linux node pools):
```json {
Create a `kubeletconfig.json` file with the following contents:
"failSwapOn": false } ```
-Create a `linuxosconfig.json` file with the following contents:
+> [!NOTE]
+> Windows kubelet custom configuration only supports the parameters `imageGcHighThreshold`, `imageGcLowThreshold`, `containerLogMaxSizeMB`, and `containerLogMaxFiles`. Modify the JSON file contents above to remove any unsupported parameters.
+
+Create a `windowskubeletconfig.json` file with the following contents (for Windows node pools):
+
+```json
+{
+ "imageGcHighThreshold": 90,
+ "imageGcLowThreshold": 70,
+ "containerLogMaxSizeMB": 20,
+ "containerLogMaxFiles": 6
+}
+```
+
+Create a `linuxosconfig.json` file with the following contents (for Linux node pools only):
```json {
Create a `linuxosconfig.json` file with the following contents:
} ```
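
For reference, a minimal sketch of what `linuxosconfig.json` might contain. Note the camelCase keys and the `sysctls` wrapper object (see the important note at the end of this article); the values are illustrative only:

```json
{
  "transparentHugePageEnabled": "madvise",
  "transparentHugePageDefrag": "defer+madvise",
  "swapFileSizeMB": 1500,
  "sysctls": {
    "netCoreSomaxconn": 163849,
    "netIpv4TcpTwReuse": true,
    "netIpv4IpLocalPortRange": "32000 60000"
  }
}
```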
-Create a new cluster specifying the kubelet and OS configurations using the JSON files created in the previous step.
+### Create a new cluster using custom configuration files
+
+When creating a new cluster, you can use the customized configuration files created in the previous step to specify the kubelet configuration, OS configuration, or both. Because the first node pool created with `az aks create` is always a Linux node pool, use the `linuxkubeletconfig.json` and `linuxosconfig.json` files.
> [!NOTE]
-> When you create a cluster, you can specify the kubelet configuration, OS configuration, or both. If you specify a configuration when creating a cluster, only the nodes in the initial node pool will have that configuration applied. Any settings not configured in the JSON file will retain the default value. CustomKubeletConfig or CustomLinuxOsConfig isn't supported for OS type: Windows.
+> If you specify a configuration when creating a cluster, only the nodes in the initial node pool will have that configuration applied. Any settings not configured in the JSON file will retain the default value. CustomLinuxOsConfig isn't supported for OS type: Windows.
```azurecli
-az aks create --name myAKSCluster --resource-group myResourceGroup --kubelet-config ./kubeletconfig.json --linux-os-config ./linuxosconfig.json
+az aks create --name myAKSCluster --resource-group myResourceGroup --kubelet-config ./linuxkubeletconfig.json --linux-os-config ./linuxosconfig.json
```
+### Add a node pool using custom configuration files
-Add a new node pool specifying the Kubelet parameters using the JSON file you created.
+When adding a node pool to a cluster, you can use the customized configuration file created in the previous step to specify the kubelet configuration. CustomKubeletConfig is supported for Linux and Windows node pools.
> [!NOTE]
-> When you add a node pool to an existing cluster, you can specify the kubelet configuration, OS configuration, or both. If you specify a configuration when adding a node pool, only the nodes in the new node pool will have that configuration applied. Any settings not configured in the JSON file will retain the default value.
+> When you add a Linux node pool to an existing cluster, you can specify the kubelet configuration, OS configuration, or both. When you add a Windows node pool to an existing cluster, you can only specify the kubelet configuration. If you specify a configuration when adding a node pool, only the nodes in the new node pool will have that configuration applied. Any settings not configured in the JSON file will retain the default value.
+
+For Linux node pools:
+
+```azurecli
+az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --kubelet-config ./linuxkubeletconfig.json
+```
+For Windows node pools (Preview):
```azurecli
-az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --kubelet-config ./kubeletconfig.json
+az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --os-type Windows --kubelet-config ./windowskubeletconfig.json
```
-## Other configuration
+### Other configurations
-The settings below can be used to modify other Operating System settings.
+These settings can be used to modify other operating system settings.
-### Message of the Day
+#### Message of the Day
-Pass the ```--message-of-the-day``` flag with the location of the file to replace the Message of the Day on Linux nodes at cluster creation or node pool creation.
+Pass the `--message-of-the-day` flag with the location of the file to replace the Message of the Day on Linux nodes at cluster creation or node pool creation.
-#### Cluster creation
+##### Cluster creation
```azurecli az aks create --cluster-name myAKSCluster --resource-group myResourceGroup --message-of-the-day ./newMOTD.txt ```
-#### Nodepool creation
+##### Nodepool creation
```azurecli az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --message-of-the-day ./newMOTD.txt ```
-## Confirm settings have been applied
+### Confirm settings have been applied
After you have applied custom node configuration, you can confirm the settings have been applied to the nodes by [connecting to the host][node-access] and verifying `sysctl` or configuration changes have been made on the filesystem.
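
For example, a minimal sketch of such a check using `kubectl debug` (the node name and container image below are placeholders, not part of this article):

```bash
# List nodes and pick one from the customized node pool
kubectl get nodes

# Start an interactive, privileged debug pod on that node
# (node name and image are placeholders)
kubectl debug node/aks-nodepool1-00000000-vmss000000 -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0

# Inside the debug pod, switch into the host filesystem,
# then read back a kernel setting applied by linuxosconfig.json
chroot /host
sysctl net.core.somaxconn
```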
+## Custom node configuration supported parameters
+
+## Kubelet custom configuration
+
+Kubelet custom configuration is supported for Linux and Windows node pools. Supported parameters differ and are documented below.
+
+### Linux Kubelet custom configuration
+
+The supported Kubelet parameters and accepted values for Linux node pools are listed below.
+
+| Parameter | Allowed values/interval | Default | Description |
+| | -- | - | -- |
+| `cpuManagerPolicy` | none, static | none | The static policy allows containers in [Guaranteed pods](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/) with integer CPU requests access to exclusive CPUs on the node. |
+| `cpuCfsQuota` | true, false | true | Enable/Disable CPU CFS quota enforcement for containers that specify CPU limits. |
+| `cpuCfsQuotaPeriod` | Interval in milliseconds (ms) | `100ms` | Sets CPU CFS quota period value. |
+| `imageGcHighThreshold` | 0-100 | 85 | The percent of disk usage after which image garbage collection is always run. Minimum disk usage that **will** trigger garbage collection. To disable image garbage collection, set to 100. |
+| `imageGcLowThreshold` | 0-100, no higher than `imageGcHighThreshold` | 80 | The percent of disk usage before which image garbage collection is never run. Minimum disk usage that **can** trigger garbage collection. |
+| `topologyManagerPolicy` | none, best-effort, restricted, single-numa-node | none | Optimize NUMA node alignment. For more information, see [Topology Manager](https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/). |
+| `allowedUnsafeSysctls` | `kernel.shm*`, `kernel.msg*`, `kernel.sem`, `fs.mqueue.*`, `net.*` | None | Allowed list of unsafe sysctls or unsafe sysctl patterns. |
+| `containerLogMaxSizeMB` | Size in megabytes (MB) | 10 | The maximum size (for example, 10 MB) of a container log file before it's rotated. |
+| `containerLogMaxFiles` | ≥ 2 | 5 | The maximum number of container log files that can be present for a container. |
+| `podMaxPids` | -1 to kernel PID limit | -1 (∞) | The maximum number of process IDs that can run in a pod. |
+
+### Windows Kubelet custom configuration (Preview)
+
+The supported Kubelet parameters and accepted values for Windows node pools are listed below.
+
+| Parameter | Allowed values/interval | Default | Description |
+| | -- | - | -- |
+| `imageGcHighThreshold` | 0-100 | 85 | The percent of disk usage after which image garbage collection is always run. Minimum disk usage that **will** trigger garbage collection. To disable image garbage collection, set to 100. |
+| `imageGcLowThreshold` | 0-100, no higher than `imageGcHighThreshold` | 80 | The percent of disk usage before which image garbage collection is never run. Minimum disk usage that **can** trigger garbage collection. |
+| `containerLogMaxSizeMB` | Size in megabytes (MB) | 10 | The maximum size (for example, 10 MB) of a container log file before it's rotated. |
+| `containerLogMaxFiles` | ≥ 2 | 5 | The maximum number of container log files that can be present for a container. |
+
+## Linux OS custom configuration
+
+The supported OS settings and accepted values are listed below.
+
+### File handle limits
+
+When you're serving a lot of traffic, the traffic often comes from a large number of local files. You can tweak the following kernel settings and built-in limits to handle more, at the cost of some system memory.
+
+| Setting | Allowed values/interval | Default | Description |
+| - | -- | - | -- |
+| `fs.file-max` | 8192 - 12000500 | 709620 | Maximum number of file-handles that the Linux kernel will allocate. By increasing this value, you can increase the maximum number of open files permitted. |
+| `fs.inotify.max_user_watches` | 781250 - 2097152 | 1048576 | Maximum number of file watches allowed by the system. Each *watch* is roughly 90 bytes on a 32-bit kernel, and roughly 160 bytes on a 64-bit kernel. |
+| `fs.aio-max-nr` | 65536 - 6553500 | 65536 | `aio-nr` shows the current system-wide number of asynchronous I/O requests. `aio-max-nr` allows you to change the maximum value that `aio-nr` can grow to. |
+| `fs.nr_open` | 8192 - 20000500 | 1048576 | The maximum number of file-handles a process can allocate. |
+
+### Socket and network tuning
+
+For agent nodes that are expected to handle very large numbers of concurrent sessions, you can tweak the following subset of TCP and network options per node pool.
+
+| Setting | Allowed values/interval | Default | Description |
+| - | -- | - | -- |
+| `net.core.somaxconn` | 4096 - 3240000 | 16384 | Maximum number of connection requests that can be queued for any given listening socket. An upper limit for the value of the backlog parameter passed to the [listen(2)](http://man7.org/linux/man-pages/man2/listen.2.html) function. If the backlog argument is greater than `somaxconn`, then it's silently truncated to this limit. |
+| `net.core.netdev_max_backlog` | 1000 - 3240000 | 1000 | Maximum number of packets, queued on the INPUT side, when the interface receives packets faster than the kernel can process them. |
+| `net.core.rmem_max` | 212992 - 134217728 | 212992 | The maximum receive socket buffer size in bytes. |
+| `net.core.wmem_max` | 212992 - 134217728 | 212992 | The maximum send socket buffer size in bytes. |
+| `net.core.optmem_max` | 20480 - 4194304 | 20480 | Maximum ancillary buffer size (option memory buffer) allowed per socket. Socket option memory is used in a few cases to store extra structures relating to usage of the socket. |
+| `net.ipv4.tcp_max_syn_backlog` | 128 - 3240000 | 16384 | The maximum number of queued connection requests that have still not received an acknowledgment from the connecting client. If this number is exceeded, the kernel will begin dropping requests. |
+| `net.ipv4.tcp_max_tw_buckets` | 8000 - 1440000 | 32768 | Maximum number of `timewait` sockets held by the system simultaneously. If this number is exceeded, the time-wait socket is immediately destroyed and a warning is printed. |
+| `net.ipv4.tcp_fin_timeout` | 5 - 120 | 60 | The length of time an orphaned (no longer referenced by any application) connection will remain in the FIN_WAIT_2 state before it's aborted at the local end. |
+| `net.ipv4.tcp_keepalive_time` | 30 - 432000 | 7200 | How often TCP sends out `keepalive` messages when `keepalive` is enabled. |
+| `net.ipv4.tcp_keepalive_probes` | 1 - 15 | 9 | How many `keepalive` probes TCP sends out, until it decides that the connection is broken. |
+| `net.ipv4.tcp_keepalive_intvl` | 1 - 75 | 75 | How frequently the probes are sent out. Multiplied by `tcp_keepalive_probes`, it makes up the time to kill a connection that isn't responding after probes are started. |
+| `net.ipv4.tcp_tw_reuse` | 0 or 1 | 0 | Allows reuse of `TIME-WAIT` sockets for new connections when it's safe from a protocol viewpoint. |
+| `net.ipv4.ip_local_port_range` | First: 1024 - 60999 and Last: 32768 - 65000 | First: 32768 and Last: 60999 | The local port range that is used by TCP and UDP traffic to choose the local port. Composed of two numbers: the first number is the first local port allowed for TCP and UDP traffic on the agent node, and the second is the last local port number. |
+| `net.ipv4.neigh.default.gc_thresh1`| 128 - 80000 | 4096 | Minimum number of entries that may be in the ARP cache. Garbage collection won't be triggered if the number of entries is below this setting. |
+| `net.ipv4.neigh.default.gc_thresh2`| 512 - 90000 | 8192 | Soft maximum number of entries that may be in the ARP cache. This setting is arguably the most important, as ARP garbage collection will be triggered about 5 seconds after reaching this soft maximum. |
+| `net.ipv4.neigh.default.gc_thresh3`| 1024 - 100000 | 16384 | Hard maximum number of entries in the ARP cache. |
+| `net.netfilter.nf_conntrack_max` | 131072 - 1048576 | 131072 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_max` is the maximum number of nodes in the hash table, that is, the maximum number of connections supported by the `nf_conntrack` module or the size of the connection tracking table. |
+| `net.netfilter.nf_conntrack_buckets` | 65536 - 147456 | 65536 | `nf_conntrack` is a module that tracks connection entries for NAT within Linux. The `nf_conntrack` module uses a hash table to record the *established connection* record of the TCP protocol. `nf_conntrack_buckets` is the size of the hash table. |
+
+### Worker limits
+
+Like file descriptor limits, the number of workers or threads that a process can create is limited by both a kernel setting and user limits. The user limit on AKS is unlimited.
+
+| Setting | Allowed values/interval | Default | Description |
+| - | -- | - | -- |
+| `kernel.threads-max` | 20 - 513785 | 55601 | Processes can spin up worker threads. The maximum number of all threads that can be created is set with the kernel setting `kernel.threads-max`. |
+
+### Virtual memory
+
+The settings below can be used to tune the operation of the virtual memory (VM) subsystem of the Linux kernel and the `writeout` of dirty data to disk.
+
+| Setting | Allowed values/interval | Default | Description |
+| - | -- | - | -- |
+| `vm.max_map_count` | 65530 - 262144 | 65530 | This file contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling `malloc`, directly by `mmap`, `mprotect`, and `madvise`, and also when loading shared libraries. |
+| `vm.vfs_cache_pressure` | 1 - 500 | 100 | This percentage value controls the tendency of the kernel to reclaim the memory that is used for caching directory and inode objects. |
+| `vm.swappiness` | 0 - 100 | 60 | This control defines how aggressively the kernel swaps memory pages. Higher values increase aggressiveness; lower values decrease the amount of swap. A value of 0 instructs the kernel not to initiate swap until the amount of free and file-backed pages is less than the high water mark in a zone. |
+| `swapFileSizeMB` | 1 MB - Size of the [temporary disk](../virtual-machines/managed-disks-overview.md#temporary-disk) (/dev/sdb) | None | `swapFileSizeMB` specifies the size in MB of a swap file that will be created on the agent nodes in this node pool. |
+| `transparentHugePageEnabled` | `always`, `madvise`, `never` | `always` | [Transparent Hugepages](https://www.kernel.org/doc/html/latest/admin-guide/mm/transhuge.html#admin-guide-transhuge) is a Linux kernel feature intended to improve performance by making more efficient use of your processor's memory-mapping hardware. When enabled, the kernel attempts to allocate `hugepages` whenever possible, and any Linux process will receive 2-MB pages if the `mmap` region is 2 MB naturally aligned. In certain cases when `hugepages` are enabled system wide, applications may end up allocating more memory resources. An application may `mmap` a large region but only touch 1 byte of it; in that case, a 2-MB page might be allocated instead of a 4-KB page for no good reason. This scenario is why it's possible to disable `hugepages` system-wide or to only have them inside `MADV_HUGEPAGE madvise` regions. |
+| `transparentHugePageDefrag` | `always`, `defer`, `defer+madvise`, `madvise`, `never` | `madvise` | This value controls whether the kernel should make aggressive use of memory compaction to make more `hugepages` available. |
+
+> [!IMPORTANT]
+> For ease of search and readability, the OS settings are displayed in this document by their name, but they should be added to the configuration JSON file or AKS API using the [camelCase capitalization convention](/dotnet/standard/design-guidelines/capitalization-conventions).
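+
+For example, the kernel setting `net.ipv4.tcp_fin_timeout` from the tables above would be written in `linuxosconfig.json` as a camelCase key inside the `sysctls` object, as in this sketch (the value is illustrative):
+
+```json
+{
+  "sysctls": {
+    "netIpv4TcpFinTimeout": 30
+  }
+}
+```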
+ ## Next steps - Learn [how to configure your AKS cluster](cluster-configuration.md).
After you have applied custom node configuration, you can confirm the settings h
[az-aks-update]: /cli/azure/aks#az-aks-update [az-aks-scale]: /cli/azure/aks#az-aks-scale [az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
[az-feature-list]: /cli/azure/feature#az-feature-list [az-provider-register]: /cli/azure/provider#az-provider-register [upgrade-cluster]: upgrade-cluster.md
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md
To control image versions, you'll want to import them into your own Azure Contai
```azurecli REGISTRY_NAME=<REGISTRY_NAME>
-SOURCE_REGISTRY=k8s.gcr.io
+SOURCE_REGISTRY=registry.k8s.io
CONTROLLER_IMAGE=ingress-nginx/controller CONTROLLER_TAG=v1.2.1 PATCH_IMAGE=ingress-nginx/kube-webhook-certgen
To control image versions, you'll want to import them into your own Azure Contai
```azurepowershell-interactive $RegistryName = "<REGISTRY_NAME>" $ResourceGroup = (Get-AzContainerRegistry | Where-Object {$_.name -eq $RegistryName} ).ResourceGroupName
-$SourceRegistry = "k8s.gcr.io"
+$SourceRegistry = "registry.k8s.io"
$ControllerImage = "ingress-nginx/controller" $ControllerTag = "v1.2.1" $PatchImage = "ingress-nginx/kube-webhook-certgen"
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md
helm repo update
# Install the cert-manager Helm chart helm install cert-manager jetstack/cert-manager \ --namespace ingress-basic \
- --version $CERT_MANAGER_TAG \
+ --version=$CERT_MANAGER_TAG \
--set installCRDs=true \ --set nodeSelector."kubernetes\.io/os"=linux \ --set image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_CONTROLLER \
aks Kubernetes Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-portal.md
Title: Access Kubernetes resources from the Azure portal description: Learn how to interact with Kubernetes resources to manage an Azure Kubernetes Service (AKS) cluster from the Azure portal. Previously updated : 12/16/2020 Last updated : 03/30/2023 # Access Kubernetes resources from the Azure portal The Azure portal includes a Kubernetes resource view for easy access to the Kubernetes resources in your Azure Kubernetes Service (AKS) cluster. Viewing Kubernetes resources from the Azure portal reduces context switching between the Azure portal and the `kubectl` command-line tool, streamlining the experience for viewing and editing your Kubernetes resources. The resource viewer currently includes multiple resource types, such as deployments, pods, and replica sets.
-The Kubernetes resource view from the Azure portal replaces the AKS dashboard add-on, which is deprecated.
+The Kubernetes resource view from the Azure portal replaces the deprecated AKS dashboard add-on.
## Prerequisites
-To view Kubernetes resources in the Azure portal, you need an AKS cluster. Any cluster is supported, but if using Azure Active Directory (Azure AD) integration, your cluster must use [AKS-managed Azure AD integration][aks-managed-aad]. If your cluster uses legacy Azure AD, you can upgrade your cluster in the portal or with the [Azure CLI][cli-aad-upgrade]. You can also [use the Azure portal][aks-quickstart-portal] to create a new AKS cluster.
+To view Kubernetes resources in the Azure portal, you need an AKS cluster. Any cluster is supported, but if you're using Azure Active Directory (Azure AD) integration, your cluster must use [AKS-managed Azure AD integration][aks-managed-aad]. If your cluster uses legacy Azure AD, you can upgrade your cluster in the portal or with the [Azure CLI][cli-aad-upgrade]. You can also [use the Azure portal][aks-quickstart-portal] to create a new AKS cluster.
## View Kubernetes resources
To see the Kubernetes resources, navigate to your AKS cluster in the Azure porta
In this example, we'll use our sample AKS cluster to deploy the Azure Vote application from the [AKS quickstart][aks-quickstart-portal].
-1. Select **Add** from any of the resource views (Namespace, Workloads, Services and ingresses, Storage, or Configuration).
-1. Paste the YAML for the Azure Vote application from the [AKS quickstart][aks-quickstart-portal].
-1. Select **Add** at the bottom of the YAML editor to deploy the application.
+1. From the **Services and ingresses** resource view, select **Create** > **Starter application**.
+2. Under **Create a basic web application**, select **Create**.
+3. On the **Application details** page, select **Next**.
+4. On the **Review YAML** page, select **Deploy**.
-Once the YAML file is added, the resource viewer shows both Kubernetes services that were created: the internal service (azure-vote-back), and the external service (azure-vote-front) to access the Azure Vote application. The external service includes a linked external IP address so you can easily view the application in your browser.
+Once the application is deployed, the resource view shows the two Kubernetes services that were created:
+
+- **azure-vote-back**: The internal service.
+- **azure-vote-front**: The external service, which includes a linked external IP address so you can view the application in your browser.
:::image type="content" source="media/kubernetes-portal/portal-services.png" alt-text="Azure Vote application information displayed in the Azure portal." lightbox="media/kubernetes-portal/portal-services.png"::: ### Monitor deployment insights
-AKS clusters with [Container insights][enable-monitor] enabled can quickly view deployment and other insights. From the Kubernetes resources view, users can see the live status of individual deployments, including CPU and memory usage, as well as transition to Azure monitor for more in-depth information about specific nodes and containers. Here's an example of deployment insights from a sample AKS cluster:
+AKS clusters with [Container insights][enable-monitor] enabled can quickly view deployment and other insights. From the Kubernetes resources view, you can see the live status of individual deployments, including CPU and memory usage. You can also go to Azure Monitor for more in-depth information about specific nodes and containers.
+
+Here's an example of deployment insights from a sample AKS cluster:
:::image type="content" source="media/kubernetes-portal/deployment-insights.png" alt-text="Deployment insights displayed in the Azure portal." lightbox="media/kubernetes-portal/deployment-insights.png":::
The Kubernetes resource view also includes a YAML editor. A built-in YAML editor
:::image type="content" source="media/kubernetes-portal/service-editor.png" alt-text="YAML editor for a Kubernetes service displayed in the Azure portal.":::
-After editing the YAML, changes are applied by selecting **Review + save**, confirming the changes, and then saving again.
+To edit a YAML file for one of your resources, follow these steps:
+
+1. Navigate to your resource in the Azure portal.
+2. Select **YAML** and make your desired edits.
+3. Select **Review + save** > **Confirm manifest changes** > **Save**.
>[!WARNING]
-> Performing direct production changes via UI or CLI is not recommended, you should leverage [continuous integration (CI) and continuous deployment (CD) best practices](kubernetes-action.md). The Azure Portal Kubernetes management capabilities and the YAML editor are built for learning and flighting new deployments in a development and testing setting.
+> We don't recommend performing direct production changes via UI or CLI. Instead, you should leverage [continuous integration (CI) and continuous deployment (CD) best practices](kubernetes-action.md). The Azure portal Kubernetes management capabilities, such as the YAML editor, are built for learning and flighting new deployments in a development and testing setting.
## Troubleshooting
This section addresses common problems and troubleshooting steps.
To access the Kubernetes resources, you must have access to the AKS cluster, the Kubernetes API, and the Kubernetes objects. Ensure that you're either a cluster administrator or a user with the appropriate permissions to access the AKS cluster. For more information on cluster security, see [Access and identity options for AKS][concepts-identity]. >[!NOTE]
-> The kubernetes resource view in the Azure Portal is only supported by [managed-AAD enabled clusters](managed-aad.md) or non-AAD enabled clusters. If you are using a managed-AAD enabled cluster, your AAD user or identity needs to have the respective roles/role bindings to access the kubernetes API, in addition to the permission to pull the [user `kubeconfig`](control-kubeconfig-access.md).
+> The Kubernetes resource view in the Azure portal is only supported by [managed-AAD enabled clusters](managed-aad.md) or non-AAD enabled clusters. If you're using a managed-AAD enabled cluster, your AAD user or identity needs to have the respective roles/role bindings to access the Kubernetes API and the permission to pull the [user `kubeconfig`](control-kubeconfig-access.md).
### Enable resource view
For existing clusters, you may need to enable the Kubernetes resource view. To e
### [Azure CLI](#tab/azure-cli) > [!TIP]
-> The AKS feature for [**API server authorized IP ranges**](api-server-authorized-ip-ranges.md) can be added to limit API server access to only the firewall's public endpoint. Another option for such clusters is updating `--api-server-authorized-ip-ranges` to include access for a local client computer or IP address range (from which portal is being browsed). To allow this access, you need the computer's public IPv4 address. You can find this address with below command or by searching "what is my IP address" in an internet browser.
+> You can add the AKS feature for [**API server authorized IP ranges**](api-server-authorized-ip-ranges.md) to limit API server access to only the firewall's public endpoint. Another option is to update the `--api-server-authorized-ip-ranges` to include access for a local client computer or IP address range (from which the portal is being browsed). To allow this access, you need the computer's public IPv4 address. You can find this address with the following command or you can search "what is my IP address" in your browser.
```bash # Retrieve your IP address
az aks update -g $RG -n $AKSNAME --api-server-authorized-ip-ranges $CURRENT_IP/3
### [Azure PowerShell](#tab/azure-powershell) > [!TIP]
-> The AKS feature for [**API server authorized IP ranges**](api-server-authorized-ip-ranges.md) can be added to limit API server access to only the firewall's public endpoint. Another option for such clusters is updating `-ApiServerAccessAuthorizedIpRange` to include access for a local client computer or IP address range (from which portal is being browsed). To allow this access, you need the computer's public IPv4 address. You can find this address with below command or by searching "what is my IP address" in an internet browser.
+> You can add the AKS feature for [**API server authorized IP ranges**](api-server-authorized-ip-ranges.md) to limit API server access to only the firewall's public endpoint. Another option is to update the `-ApiServerAccessAuthorizedIpRange` to include access for a local client computer or IP address range (from which the portal is being browsed). To allow this access, you need the computer's public IPv4 address. You can find this address with the following command or you can search "what is my IP address" in your browser.
```azurepowershell # Retrieve your IP address
Set-AzAksCluster -ResourceGroupName $RG -Name $AKSNAME -ApiServerAccessAuthorize
## Next steps
-This article showed you how to access Kubernetes resources for your AKS cluster. See [Deployments and YAML manifests][deployments] for a deeper understanding of cluster resources and the YAML files that are accessed with the Kubernetes resource viewer.
+This article showed you how to access Kubernetes resources from the Azure portal. For more information on cluster resources, see [Deployments and YAML manifests][deployments].
<!-- LINKS - internal --> [concepts-identity]: concepts-identity.md [aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
[deployments]: concepts-clusters-workloads.md#deployments-and-yaml-manifests [aks-managed-aad]: managed-aad.md [cli-aad-upgrade]: managed-aad.md#upgrade-to-aks-managed-azure-ad-integration
aks Quick Kubernetes Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep.md
Two [Kubernetes Services][kubernetes-service] are also created:
* An external service to access the Azure Vote application from the internet. 1. Create a file named `azure-vote.yaml`.
- * If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system
+
1. Copy in the following YAML definition: ```yaml
aks Quick Kubernetes Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md
Two [Kubernetes Services][kubernetes-service] are also created:
1. Create a file named `azure-vote.yaml` and copy in the following manifest.
- * If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system.
-
+
```yaml apiVersion: apps/v1 kind: Deployment
aks Quick Kubernetes Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md
Two Kubernetes Services are also created:
* An internal service for the Redis instance. * An external service to access the Azure Vote application from the internet.
-1. In the Cloud Shell, use an editor to create a file named `azure-vote.yaml`, such as:
- * `code azure-vote.yaml`
- * `nano azure-vote.yaml`, or
- * `vi azure-vote.yaml`.
-
-1. Copy in the following YAML definition:
+1. In the Cloud Shell, open an editor and create a file named `azure-vote.yaml`.
+2. Paste in the following YAML definition:
```yaml apiVersion: apps/v1
aks Quick Kubernetes Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-powershell.md
Two [Kubernetes Services][kubernetes-service] are also created:
* An external service to access the Azure Vote application from the internet. 1. Create a file named `azure-vote.yaml`.
- * If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system
+
1. Copy in the following YAML definition: ```yaml
aks Quick Kubernetes Deploy Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-rm-template.md
Two [Kubernetes Services][kubernetes-service] are also created:
* An external service to access the Azure Vote application from the internet. 1. Create a file named `azure-vote.yaml`.
- * If you use the Azure Cloud Shell, this file can be created using `code`, `vi`, or `nano` as if working on a virtual or physical system
+
1. Copy in the following YAML definition: ```yaml
aks Stop Api Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/stop-api-upgrade.md
- Title: Stop cluster upgrades on API breaking changes in Azure Kubernetes Service (AKS) (preview)
-description: Learn how to stop minor version change Azure Kubernetes Service (AKS) cluster upgrades on API breaking changes.
--- Previously updated : 03/24/2023--
-# Stop cluster upgrades on API breaking changes in Azure Kubernetes Service (AKS)
-
-To stay within a supported Kubernetes version, you usually have to upgrade your version at least once per year and prepare for all possible disruptions. These disruptions include ones caused by API breaking changes and deprecations and dependencies such as Helm and CSI. It can be difficult to anticipate these disruptions and migrate critical workloads without experiencing any downtime.
-
-Azure Kubernetes Service (AKS) now supports fail fast on minor version change cluster upgrades. This feature alerts you with an error message if it detects usage on deprecated APIs in the goal version.
--
-## Fail fast on control plane minor version manual upgrades in AKS (preview)
-
-AKS will fail fast on minor version change cluster manual upgrades if it detects usage on deprecated APIs in the goal version. This will only happen if the following criteria are true:
--- It's a minor version change for the cluster control plane.-- Your Kubernetes goal version is >= 1.26.0.-- The PUT MC request uses a preview API version of >= 2023-01-02-preview.-- The usage is performed within the last 1-12 hours. We record usage hourly, so usage within the last hour isn't guaranteed to appear in the detection.-
-If the previous criteria are true and you attempt an upgrade, you'll receive an error message similar to the following example error message:
-
-```
-Bad Request({
-
- "code": "ValidationError",
-
- "message": "Control Plane upgrade is blocked due to recent usage of a Kubernetes API deprecated in the specified version. Please refer to https://kubernetes.io/docs/reference/using-api/deprecation-guide to migrate the usage. To bypass this error, set IgnoreKubernetesDeprecations in upgradeSettings.overrideSettings. Bypassing this error without migrating usage will result in the deprecated Kubernetes API calls failing. Usage details: 1 error occurred:\n\t* usage has been detected on API flowcontrol.apiserver.k8s.io.prioritylevelconfigurations.v1beta1, and was recently seen at: 2023-03-23 20:57:18 +0000 UTC, which will be removed in 1.26\n\n",
-
- "subcode": "UpgradeBlockedOnDeprecatedAPIUsage"
-
-})
-```
-
-After receiving the error message, you have two options:
--- Remove usage on your end and wait 12 hours for the current record to expire.-- Bypass the validation to ignore API changes.-
-### Remove usage on API breaking changes
-
-Remove usage on API breaking changes using the following steps:
-
-1. Remove the deprecated API, which is listed in the error message.
-2. Wait 12 hours for the current record to expire.
-3. Retry your cluster upgrade.
-
-### Bypass validation to ignore API changes
-
-To bypass validation to ignore API breaking changes, update the `"properties":` block of `Microsoft.ContainerService/ManagedClusters` `PUT` operation with the following settings:
-
-> [!NOTE]
-> The date and time you specify for `"until"` has to be in the future. `Z` stands for timezone. The following example is in GMT. For more information, see [Combined date and time representations](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations).
-
-```
-{
- "properties": {
- "upgradeSettings": {
- "overrideSettings": {
- "controlPlaneOverrides": [
- "IgnoreKubernetesDeprecations"
- ],
- "until": "2023-04-01T13:00:00Z"
- }
- }
- }
-}
-```
-
-## Next steps
-
-In this article, you learned how AKS detects deprecated APIs before an update is triggered and fails the upgrade operation upfront. To learn more about AKS cluster upgrades, see:
--- [Upgrade an AKS cluster][upgrade-cluster]-- [Use Planned Maintenance to schedule and control upgrades for your AKS clusters (preview)][planned-maintenance-aks]-
-<!-- INTERNAL LINKS -->
-[upgrade-cluster]: upgrade-cluster.md
-[planned-maintenance-aks]: planned-maintenance.md
api-management Authentication Basic Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-basic-policy.md
Use the `authentication-basic` policy to authenticate with a backend service usi
| Attribute | Description | Required | Default | | -- | | -- | - |
-|username|Specifies the username of the Basic credential.|Yes|N/A|
-|password|Specifies the password of the Basic credential.|Yes|N/A|
+|username|Specifies the username of the Basic credential. Policy expressions are allowed. |Yes|N/A|
+|password|Specifies the password of the Basic credential. Policy expressions are allowed. |Yes|N/A|
## Usage
api-management Authentication Certificate Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-certificate-policy.md
| Attribute | Description | Required | Default | | -- | | -- | - |
-|thumbprint|The thumbprint for the client certificate.|Either `thumbprint` or `certificate-id` can be present.|N/A|
-|certificate-id|The certificate resource name.|Either `thumbprint` or `certificate-id` can be present.|N/A|
-|body|Client certificate as a byte array. Use if the certificate isn't retrieved from the built-in certificate store.|No|N/A|
-|password|Password for the client certificate.|Use if certificate specified in `body` is password protected.|N/A|
+|thumbprint|The thumbprint for the client certificate. Policy expressions are allowed. |Either `thumbprint` or `certificate-id` can be present.|N/A|
+|certificate-id|The certificate resource name. Policy expressions are allowed.|Either `thumbprint` or `certificate-id` can be present.|N/A|
+|body|Client certificate as a byte array. Use if the certificate isn't retrieved from the built-in certificate store. Policy expressions are allowed.|No|N/A|
+|password|Password for the client certificate. Policy expressions are allowed.|Use if certificate specified in `body` is password protected.|N/A|
## Usage
api-management Authentication Managed Identity Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-managed-identity-policy.md
Both system-assigned identity and any of the multiple user-assigned identities c
| Attribute | Description | Required | Default | | -- | | -- | - | |resource|String. The application ID of the target web API (secured resource) in Azure Active Directory. Policy expressions are allowed. |Yes|N/A|
-|client-id|String. The client ID of the user-assigned identity in Azure Active Directory. Policy expressions are not allowed. |No|system-assigned identity|
-|output-token-variable-name|String. Name of the context variable that will receive token value as an object of type `string`. Policy expresssions are not allowed. |No|N/A|
-|ignore-error|Boolean. If set to `true`, the policy pipeline will continue to execute even if an access token is not obtained.|No|`false`|
+|client-id|String. The client ID of the user-assigned identity in Azure Active Directory. Policy expressions aren't allowed. |No|system-assigned identity|
+|output-token-variable-name|String. Name of the context variable that will receive token value as an object of type `string`. Policy expressions aren't allowed. |No|N/A|
+|ignore-error|Boolean. If set to `true`, the policy pipeline continues to execute even if an access token isn't obtained.|No|`false`|
## Usage
api-management Cache Lookup Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-lookup-policy.md
Use the `cache-lookup` policy to perform cache lookup and return a valid cached
| Attribute | Description | Required | Default | | -- | | -- | - |
-| allow-private-response-caching | When set to `true`, allows caching of requests that contain an Authorization header. | No | `false` |
-| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise. | No | `prefer-external` |
-| downstream-caching-type | This attribute must be set to one of the following values.<br /><br /> - none - downstream caching is not allowed.<br />- private - downstream private caching is allowed.<br />- public - private and shared downstream caching is allowed. | No | none |
-| must-revalidate | When downstream caching is enabled this attribute turns on or off the `must-revalidate` cache control directive in gateway responses. | No | `true` |
-| vary-by-developer | Set to `true` to cache responses per developer account that owns [subscription key](./api-management-subscriptions.md) included in the request. | Yes | `false` |
-| vary-by-developer-groups | Set to `true` to cache responses per [user group](./api-management-howto-create-groups.md). | Yes | `false` |
+| allow-private-response-caching | When set to `true`, allows caching of requests that contain an Authorization header. Policy expressions are allowed. | No | `false` |
+| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise.<br/><br/>Policy expressions aren't allowed. | No | `prefer-external` |
+| downstream-caching-type | This attribute must be set to one of the following values.<br /><br /> - none - downstream caching is not allowed.<br />- private - downstream private caching is allowed.<br />- public - private and shared downstream caching is allowed.<br/><br/>Policy expressions are allowed. | No | none |
+| must-revalidate | When downstream caching is enabled this attribute turns on or off the `must-revalidate` cache control directive in gateway responses. Policy expressions are allowed. | No | `true` |
+| vary-by-developer | Set to `true` to cache responses per developer account that owns [subscription key](./api-management-subscriptions.md) included in the request. Policy expressions are allowed. | Yes | `false` |
+| vary-by-developer-groups | Set to `true` to cache responses per [user group](./api-management-howto-create-groups.md). Policy expressions are allowed. | Yes | `false` |
## Elements
api-management Cache Lookup Value Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-lookup-value-policy.md
Use the `cache-lookup-value` policy to perform cache lookup by key and return a
| Attribute | Description | Required | Default | ||--|--|--|
-| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise. | No | `prefer-external` |
-| default-value | A value that will be assigned to the variable if the cache key lookup resulted in a miss. If this attribute is not specified, `null` is assigned. | No | `null` |
-| key | Cache key value to use in the lookup. | Yes | N/A |
-| variable-name | Name of the [context variable](api-management-policy-expressions.md#ContextVariables) the looked up value will be assigned to, if lookup is successful. If lookup results in a miss, the variable will not be set. | Yes | N/A |
+| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise.<br/><br/>Policy expressions aren't allowed. | No | `prefer-external` |
+| default-value | A value that will be assigned to the variable if the cache key lookup resulted in a miss. If this attribute is not specified, `null` is assigned. Policy expressions are allowed. | No | `null` |
+| key | Cache key value to use in the lookup. Policy expressions are allowed. | Yes | N/A |
+| variable-name | Name of the [context variable](api-management-policy-expressions.md#ContextVariables) the looked up value will be assigned to, if lookup is successful. If lookup results in a miss, the variable will not be set. Policy expressions aren't allowed. | Yes | N/A |
## Usage
api-management Cache Remove Value Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-remove-value-policy.md
The `cache-remove-value` deletes a cached item identified by its key. The key ca
| Attribute | Description | Required | Default | ||--|--|--|
-| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise. | No | `prefer-external` |
-| key | The key of the previously cached value to be removed from the cache. | Yes | N/A |
+| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise. <br/><br/>Policy expressions aren't allowed. | No | `prefer-external` |
+| key | The key of the previously cached value to be removed from the cache. Policy expressions are allowed. | Yes | N/A |
## Usage
api-management Cache Store Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-store-policy.md
The `cache-store` policy caches responses according to the specified cache setti
| Attribute | Description | Required | Default | | -- | | -- | - |
-| duration | Time-to-live of the cached entries, specified in seconds. | Yes | N/A |
-| cache-response | Set to `true` to cache the current HTTP response. If the attribute is omitted or set to `false`, only HTTP responses with the status code `200 OK` are cached. | No | `false` |
+| duration | Time-to-live of the cached entries, specified in seconds. Policy expressions are allowed. | Yes | N/A |
+| cache-response | Set to `true` to cache the current HTTP response. If the attribute is omitted or set to `false`, only HTTP responses with the status code `200 OK` are cached. Policy expressions are allowed. | No | `false` |
## Usage
api-management Cache Store Value Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-store-value-policy.md
The `cache-store-value` performs cache storage by key. The key can have an arbit
| Attribute | Description | Required | Default | ||--|--|--|
-| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise. | No | `prefer-external` |
-| duration | Value will be cached for the provided duration value, specified in seconds. | Yes | N/A |
-| key | Cache key the value will be stored under. | Yes | N/A |
-| value | The value to be cached. | Yes | N/A |
+| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the [built-in API Management cache](api-management-howto-cache.md),<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise.<br/><br/>Policy expressions aren't allowed.| No | `prefer-external` |
+| duration | Value will be cached for the provided duration value, specified in seconds. Policy expressions are allowed. | Yes | N/A |
+| key | Cache key the value will be stored under. Policy expressions are allowed. | Yes | N/A |
+| value | The value to be cached. Policy expressions are allowed. | Yes | N/A |
## Usage
api-management Check Header Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/check-header-policy.md
Use the `check-header` policy to enforce that a request has a specified HTTP he
| Attribute | Description | Required | Default | | -- | - | -- | - |
-| name | The name of the HTTP header to check. | Yes | N/A |
-| failed-check-httpcode | HTTP status code to return if the header doesn't exist or has an invalid value. | Yes | N/A |
-| failed-check-error-message | Error message to return in the HTTP response body if the header doesn't exist or has an invalid value. This message must have any special characters properly escaped. | Yes | N/A |
-| ignore-case | Boolean. If set to `true`, case is ignored when the header value is compared against the set of acceptable values. | Yes | N/A |
+| name | The name of the HTTP header to check. Policy expressions are allowed. | Yes | N/A |
+| failed-check-httpcode | HTTP status code to return if the header doesn't exist or has an invalid value. Policy expressions are allowed. | Yes | N/A |
+| failed-check-error-message | Error message to return in the HTTP response body if the header doesn't exist or has an invalid value. This message must have any special characters properly escaped. Policy expressions are allowed. | Yes | N/A |
+| ignore-case | Boolean. If set to `true`, case is ignored when the header value is compared against the set of acceptable values. Policy expressions are allowed. | Yes | N/A |
## Elements
Use the `check-header` policy to enforce that a request has a specified HTTP he
| value | Add one or more of these elements to specify allowed HTTP header values. When multiple `value` elements are specified, the check is considered a success if any one of the values is a match. | No |

## Usage

-- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound
-- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- **[Policy sections:](./api-management-howto-policies.md#sections)** inbound
+- **[Policy scopes:](./api-management-howto-policies.md#scopes)** global, product, API, operation
- [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted

## Example
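A minimal sketch of the policy, rejecting requests whose `Authorization` header doesn't match a sample token (the header value here is a placeholder):

```xml
<check-header name="Authorization" failed-check-httpcode="401" failed-check-error-message="Not authorized" ignore-case="false">
    <!-- The check succeeds if the header matches any listed value -->
    <value>f6dc69a089844cf6b2019bae6d36fac8</value>
</check-header>
```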
Use the `check-header` policy to enforce that a request has a specified HTTP he
* [API Management access restriction policies](api-management-access-restriction-policies.md)
api-management Cors Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cors-policy.md
The `cors` policy adds cross-origin resource sharing (CORS) support to an operat
|Name|Description|Required|Default|
|-|--|--|-|
-|allow-credentials|The `Access-Control-Allow-Credentials` header in the preflight response will be set to the value of this attribute and affect the client's ability to submit credentials in cross-domain requests.|No|`false`|
-|terminate-unmatched-request|Controls the processing of cross-origin requests that don't match the policy settings.<br/><br/>When `OPTIONS` request is processed as a preflight request and `Origin` header doesn't match policy settings:<br/> - If the attribute is set to `true`, immediately terminate the request with an empty `200 OK` response<br/>- If the attribute is set to `false`, check inbound for other in-scope `cors` policies that are direct children of the inbound element and apply them. If no `cors` policies are found, terminate the request with an empty `200 OK` response. <br/><br/>When `GET` or `HEAD` request includes the `Origin` header (and therefore is processed as a simple cross-origin request), and doesn't match policy settings:<br/>- If the attribute is set to `true`, immediately terminate the request with an empty `200 OK` response.<br/> - If the attribute is set to `false`, allow the request to proceed normally and don't add CORS headers to the response.|No|`true`|
+|allow-credentials|The `Access-Control-Allow-Credentials` header in the preflight response will be set to the value of this attribute and affect the client's ability to submit credentials in cross-domain requests. Policy expressions are allowed.|No|`false`|
+|terminate-unmatched-request|Controls the processing of cross-origin requests that don't match the policy settings. Policy expressions are allowed.<br/><br/>When an `OPTIONS` request is processed as a preflight request and the `Origin` header doesn't match policy settings:<br/>- If the attribute is set to `true`, immediately terminate the request with an empty `200 OK` response.<br/>- If the attribute is set to `false`, check inbound for other in-scope `cors` policies that are direct children of the inbound element and apply them. If no `cors` policies are found, terminate the request with an empty `200 OK` response.<br/><br/>When a `GET` or `HEAD` request includes the `Origin` header (and is therefore processed as a simple cross-origin request) and doesn't match policy settings:<br/>- If the attribute is set to `true`, immediately terminate the request with an empty `200 OK` response.<br/>- If the attribute is set to `false`, allow the request to proceed normally and don't add CORS headers to the response.|No|`true`|
## Elements

|Name|Description|Required|Default|
|-|--|--|-|
|allowed-origins|Contains `origin` elements that describe the allowed origins for cross-domain requests. `allowed-origins` can contain either a single `origin` element that specifies `*` to allow any origin, or one or more `origin` elements that contain a URI.|Yes|N/A|
-|origin|The value can be either `*` to allow all origins, or a URI that specifies a single origin. The URI must include a scheme, host, and port.|Yes|If the port is omitted in a URI, port 80 is used for HTTP and port 443 is used for HTTPS.|
|allowed-methods|This element is required if methods other than `GET` or `POST` are allowed. Contains `method` elements that specify the supported HTTP verbs. The value `*` indicates all methods.|No|If this section isn't present, `GET` and `POST` are supported.|
-|method|Specifies an HTTP verb.|At least one `method` element is required if the `allowed-methods` section is present.|N/A|
|allowed-headers|This element contains `header` elements specifying names of the headers that can be included in the request.|Yes|N/A|
|expose-headers|This element contains `header` elements specifying names of the headers that will be accessible by the client.|No|N/A|
-|header|Specifies a header name.|At least one `header` element is required in `allowed-headers` or in `expose-headers` if that section is present.|N/A|
> [!CAUTION]
> Use the `*` wildcard with care in policy settings. This configuration may be overly permissive and may make an API more vulnerable to certain [API security threats](mitigate-owasp-api-threats.md#security-misconfiguration).
+### allowed-origins elements
+
+|Name|Description|Required|Default|
+|-|--|--|-|
+|origin|The value can be either `*` to allow all origins, or a URI that specifies a single origin. The URI must include a scheme, host, and port.|Yes|If the port is omitted in a URI, port 80 is used for HTTP and port 443 is used for HTTPS.|
### allowed-methods attributes

|Name|Description|Required|Default|
|-|--|--|-|
-|preflight-result-max-age|The `Access-Control-Max-Age` header in the preflight response will be set to the value of this attribute and affect the user agent's ability to cache the preflight response.|No|0|
+|preflight-result-max-age|The `Access-Control-Max-Age` header in the preflight response will be set to the value of this attribute and affect the user agent's ability to cache the preflight response. Policy expressions are allowed.|No|0|
+
+### allowed-methods elements
+
+|Name|Description|Required|Default|
+|-|--|--|-|
+|method|Specifies an HTTP verb. Policy expressions are allowed.|At least one `method` element is required if the `allowed-methods` section is present.|N/A|
+
+### allowed-headers elements
+
+|Name|Description|Required|Default|
+|-|--|--|-|
+|header|Specifies a header name.|At least one `header` element is required in `allowed-headers` if that section is present.|N/A|
+
+### expose-headers elements
+
+|Name|Description|Required|Default|
+|-|--|--|-|
+|header|Specifies a header name.|At least one `header` element is required in `expose-headers` if that section is present.|N/A|
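Putting the attributes and elements together, a sketch of a `cors` policy that allows a single origin (the `contoso.example` origin is a placeholder):

```xml
<cors allow-credentials="true">
    <allowed-origins>
        <origin>https://contoso.example</origin>
    </allowed-origins>
    <!-- Cache preflight responses for 300 seconds -->
    <allowed-methods preflight-result-max-age="300">
        <method>GET</method>
        <method>POST</method>
    </allowed-methods>
    <allowed-headers>
        <header>*</header>
    </allowed-headers>
    <expose-headers>
        <header>*</header>
    </expose-headers>
</cors>
```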
## Usage

- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
api-management Emit Metric Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/emit-metric-policy.md
The `emit-metric` policy sends custom metrics in the specified format to Applica
| Attribute | Description | Required | Default value |
| -- | -- | -- | -- |
-| name | A string or policy expression. Name of custom metric. | Yes | N/A |
-| namespace | A string or policy expression. Namespace of custom metric. | No | API Management |
-| value | An integer or policy expression. Value of custom metric. | No | 1 |
+| name | A string. Name of custom metric. Policy expressions aren't allowed. | Yes | N/A |
+| namespace | A string. Namespace of custom metric. Policy expressions aren't allowed. | No | API Management |
+| value | Value of custom metric expressed as an integer. Policy expressions are allowed. | No | 1 |
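For example, a sketch that emits a counter metric per request, assuming a `dimension` child element (covered under Elements) with placeholder names:

```xml
<emit-metric name="Request" namespace="my-metrics" value="1">
    <!-- Dimension name and value are placeholders -->
    <dimension name="Client IP" value="@(context.Request.IpAddress)" />
</emit-metric>
```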
## Elements
api-management Enable Cors Power Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/enable-cors-power-platform.md
+
+ Title: Enable CORS policies to test Azure API Management custom connector
+description: How to enable CORS policies in Azure API Management and Power Platform to test a custom connector from Power Platform applications.
+Last updated : 03/24/2023
+# Enable CORS policies to test custom connector from Power Platform
+Cross-origin resource sharing (CORS) is an HTTP-header based mechanism that allows a server to indicate any origins (domain, scheme, or port) other than its own from which a browser should permit loading resources. Customers can add a [CORS policy](cors-policy.md) to their web APIs in Azure API Management, which adds cross-origin resource sharing support to an operation or an API to allow cross-domain calls from browser-based clients.
+
+If you've exported an API from API Management as a [custom connector](export-api-power-platform.md) in the Power Platform and want to use the Power Apps or Power Automate test console to call the API, you need to configure your API to explicitly enable cross-origin requests from Power Platform applications. This article shows you how to configure the following two necessary policy settings:
+
+* Add a CORS policy to your API
+
+* Add a policy to your custom connector that sets an Origin header on HTTP requests
+
+## Prerequisites
++ Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)
++ Export an API from your API Management instance to a Power Platform environment as a [custom connector](export-api-power-platform.md)
+## Add CORS policy to API in API Management
+
+Follow these steps to configure the CORS policy in API Management.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and go to your API Management instance.
+1. In the left menu, select **APIs**, and then select the API that you exported as a custom connector. Optionally, select a specific API operation to apply the policy to.
+1. In the **Policies** section, in the **Inbound processing** section, select **+ Add policy**.
+ 1. Select **Allow cross-origin resource sharing (CORS)**.
+ 1. Add the following **Allowed origin**: `https://make.powerapps.com`.
+ 1. Select **Save**.
+
+* For more information about configuring a policy, see [Set or edit policies](set-edit-policies.md).
+* For details about the CORS policy, see the [cors](cors-policy.md) policy reference.
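The resulting policy should look roughly like this sketch (only the Power Apps origin is required here; the other elements depend on your API):

```xml
<cors allow-credentials="true">
    <allowed-origins>
        <origin>https://make.powerapps.com</origin>
    </allowed-origins>
    <allowed-methods>
        <method>*</method>
    </allowed-methods>
    <allowed-headers>
        <header>*</header>
    </allowed-headers>
</cors>
```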
+
+> [!NOTE]
+> If you already have an existing CORS policy at the service (all APIs) level to enable the test console of the developer portal, you can add the `https://make.powerapps.com` origin to that policy instead of configuring a separate policy for the API or operation.
+
+> [!NOTE]
+> Depending on how the custom connector gets used in Power Platform applications, you might need to configure additional origins in the CORS policy. If you experience CORS problems when running Power Platform applications, use developer tools in your browser, tracing in API Management, or Application Insights to investigate the issues.
++
+## Add policy to custom connector to set Origin header
+
+Add the following policy to your custom connector in your Power Platform environment. The policy sets an Origin header to match the CORS origin you allowed in API Management.
+
+For details about editing settings of a custom connector, see [Create a custom connector from scratch](/connectors/custom-connectors/define-blank).
+
+1. Sign in to Power Apps or Power Automate.
+1. On the left pane, select **Data** > **Custom Connectors**.
+1. Select your connector from the list of custom connectors.
+1. Select the pencil (Edit) icon to edit the custom connector.
+1. Select **3. Definition**.
+1. In **Policies**, select **+ New policy**. Select or enter the following policy details.
+
+
+ |Setting |Value |
+ |||
+ |Name | A name of your choice, such as **set-origin-header** |
+ |Template | **Set HTTP header** |
+ |Header name | **Origin** |
+ |Header value | `https://make.powerapps.com` (same URL that you configured in API Management) |
+ |Action if header exists | **override** |
+ |Run policy on | **Request** |
+
+ :::image type="content" source="media/enable-cors-power-platform/cors-policy-power-platform.png" alt-text="Screenshot of creating policy in Power Platform custom connector to set an Origin header in HTTP requests.":::
+
+1. Select **Update connector**.
+
+1. After setting the policy, go to the **5. Test** page to test the custom connector.
+
+## Next steps
+
+* [Learn more about the Power Platform](https://powerplatform.microsoft.com/)
+* [Learn more about creating and using custom connectors](/connectors/custom-connectors/)
api-management Export Api Power Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/export-api-power-platform.md
Previously updated : 08/12/2022
Last updated : 03/24/2023

# Export APIs from Azure API Management to the Power Platform
This article walks through the steps in the Azure portal to create a custom Powe
## Prerequisites

+ Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)
-+ Make sure there is an API in your API Management instance that you'd like to export to the Power Platform
++ Make sure there's an API in your API Management instance that you'd like to export to the Power Platform
+ Make sure you have a Power Apps or Power Automate [environment](/powerapps/powerapps-overview#power-apps-for-admins)

## Create a custom connector to an API
This article walks through the steps in the Azure portal to create a custom Powe
:::image type="content" source="media/export-api-power-platform/create-custom-connector.png" alt-text="Create custom connector to API in API Management":::
-Once the connector is created, navigate to your [Power Apps](https://make.powerapps.com) or [Power Automate](https://make.powerautomate.com) environment. You will see the API listed under **Data > Custom Connectors**.
+Once the connector is created, navigate to your [Power Apps](https://make.powerapps.com) or [Power Automate](https://make.powerautomate.com) environment. You'll see the API listed under **Data > Custom Connectors**.
:::image type="content" source="media/export-api-power-platform/custom-connector-power-app.png" alt-text="Custom connector in Power Platform":::
You can manage your custom connector in your Power Apps or Power Platform enviro
1. Select your connector from the list of custom connectors.
1. Select the pencil (Edit) icon to edit and test the custom connector.
-> [!NOTE]
-> To call the API from the Power Apps test console, you need to add the `https://make.powerautomate.com` URL as an origin to the [CORS policy](cors-policy.md) in your API Management instance.
+> [!IMPORTANT]
+> To call the API from the Power Apps test console, you need to configure a CORS policy in your API Management instance and create a policy in the custom connector to set an Origin header in HTTP requests. For more information, see [Enable CORS policies to test custom connector from Power Platform](enable-cors-power-platform.md).
>
-> Depending on how the custom connector gets used when running Power Apps, you might need to configure additional origins in the CORS policy. You can use developer tools in your browser, tracing in API Management, or Application Insights to investigate CORS issues when running Power Apps.
## Update a custom connector
api-management Find And Replace Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/find-and-replace-policy.md
The `find-and-replace` policy finds a request or response substring and replaces
| Attribute | Description | Required | Default |
| -- | -- | -- | - |
-|from|The string to search for.|Yes|N/A|
-|to|The replacement string. Specify a zero length replacement string to remove the search string.|Yes|N/A|
+|from|The string to search for. Policy expressions are allowed. |Yes|N/A|
+|to|The replacement string. Specify a zero length replacement string to remove the search string. Policy expressions are allowed. |Yes|N/A|
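A minimal sketch with placeholder strings:

```xml
<!-- Replace every occurrence of "notebook" with "laptop" in the body -->
<find-and-replace from="notebook" to="laptop" />
```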
## Usage
api-management Forward Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/forward-request-policy.md
The `forward-request` policy forwards the incoming request to the backend servic
| Attribute | Description | Required | Default |
| -- | -- | -- | - |
-| timeout | The amount of time in seconds to wait for the HTTP response headers to be returned by the backend service before a timeout error is raised. Minimum value is 0 seconds. Values greater than 240 seconds may not be honored, because the underlying network infrastructure can drop idle connections after this time. | No | 300 |
-| follow-redirects | Specifies whether redirects from the backend service are followed by the gateway or returned to the caller. | No | `false` |
+| timeout | The amount of time in seconds to wait for the HTTP response headers to be returned by the backend service before a timeout error is raised. Minimum value is 0 seconds. Values greater than 240 seconds may not be honored, because the underlying network infrastructure can drop idle connections after this time. Policy expressions are allowed. | No | 300 |
+| follow-redirects | Specifies whether redirects from the backend service are followed by the gateway or returned to the caller. Policy expressions are allowed. | No | `false` |
| buffer-request-body | When set to `true`, the request is buffered and will be reused on [retry](retry-policy.md). | No | `false` |
-| buffer-response | Affects processing of chunked responses. When set to `false`, each chunk received from the backend is immediately returned to the caller. When set to `true`, chunks are buffered (8 KB, unless end of stream is detected) and only then returned to the caller.<br/><br/>Set to `false` with backends such as those implementing [server-sent events (SSE)](how-to-server-sent-events.md) that require content to be returned or streamed immediately to the caller. | No | `true` |
-| fail-on-error-status-code | When set to `true`, triggers [on-error](api-management-error-handling-policies.md) section for response codes in the range from 400 to 599 inclusive. | No | `false` |
+| buffer-response | Affects processing of chunked responses. When set to `false`, each chunk received from the backend is immediately returned to the caller. When set to `true`, chunks are buffered (8 KB, unless end of stream is detected) and only then returned to the caller.<br/><br/>Set to `false` with backends such as those implementing [server-sent events (SSE)](how-to-server-sent-events.md) that require content to be returned or streamed immediately to the caller. Policy expressions aren't allowed. | No | `true` |
+| fail-on-error-status-code | When set to `true`, triggers [on-error](api-management-error-handling-policies.md) section for response codes in the range from 400 to 599 inclusive. Policy expressions aren't allowed. | No | `false` |
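For instance, a sketch that forwards with a shortened timeout and streams chunked responses straight to the caller:

```xml
<forward-request timeout="60" buffer-response="false" />
```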
## Usage
api-management Include Fragment Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/include-fragment-policy.md
The policy inserts the policy fragment as-is at the location you select in the p
| Attribute | Description | Required | Default |
| -- | -- | -- | - |
-| fragment-id | A string. Specifies the identifier (name) of a policy fragment created in the API Management instance. | Yes | N/A |
+| fragment-id | A string. Specifies the identifier (name) of a policy fragment created in the API Management instance. Policy expressions aren't allowed. | Yes | N/A |
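A minimal sketch, assuming a fragment named `myFragment` exists in the instance:

```xml
<include-fragment fragment-id="myFragment" />
```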
## Usage
api-management Invoke Dapr Binding Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/invoke-dapr-binding-policy.md
The policy assumes that Dapr runtime is running in a sidecar container in the sa
| Attribute | Description | Required | Default |
|--|--|--|--|
-| name | Target binding name. Must match the name of the bindings [defined](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/bindings_api.md#bindings-structure) in Dapr. | Yes | N/A |
-| operation | Target operation name (binding specific). Maps to the [operation](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/bindings_api.md#invoking-output-bindings) property in Dapr. | No | None |
-| ignore-error | If set to `true` instructs the policy not to trigger ["on-error"](api-management-error-handling-policies.md) section upon receiving error from Dapr runtime. | No | `false` |
-| response-variable-name | Name of the [Variables](api-management-policy-expressions.md#ContextVariables) collection entry to use for storing response from Dapr runtime. | No | None |
-| timeout | Time (in seconds) to wait for Dapr runtime to respond. Can range from 1 to 240 seconds. | No | 5 |
-| template | Templating engine to use for transforming the message content. "Liquid" is the only supported value. | No | None |
+| name | Target binding name. Must match the name of the bindings [defined](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/bindings_api.md#bindings-structure) in Dapr. Policy expressions are allowed. | Yes | N/A |
+| operation | Target operation name (binding specific). Maps to the [operation](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/bindings_api.md#invoking-output-bindings) property in Dapr. Policy expressions aren't allowed. | No | None |
+| ignore-error | If set to `true`, instructs the policy not to trigger the ["on-error"](api-management-error-handling-policies.md) section upon receiving an error from the Dapr runtime. Policy expressions aren't allowed. | No | `false` |
+| response-variable-name | Name of the [Variables](api-management-policy-expressions.md#ContextVariables) collection entry to use for storing response from Dapr runtime. Policy expressions aren't allowed. | No | None |
+| timeout | Time (in seconds) to wait for Dapr runtime to respond. Can range from 1 to 240 seconds. Policy expressions are allowed.| No | 5 |
+| template | Templating engine to use for transforming the message content. "Liquid" is the only supported value. | No | None |
| content-type | Type of the message content. "application/json" is the only supported value. | No | None |
+## Elements
+
+| Element | Description | Required |
+||--|-|
+| metadata | Binding specific metadata in the form of key/value pairs. Maps to the [metadata](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/bindings_api.md#invoking-output-bindings) property in Dapr. | No |
+| data | Content of the message. Maps to the [data](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/bindings_api.md#invoking-output-bindings) property in Dapr. Policy expressions are allowed. | No |
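Combining the attributes and elements, a sketch that invokes a hypothetical `orders-queue` output binding (the binding name, operation, and metadata key are placeholders):

```xml
<invoke-dapr-binding name="orders-queue" operation="create" response-variable-name="bindingResponse" timeout="10">
    <metadata>
        <item key="source">api-management</item>
    </metadata>
    <!-- Forward the request body as the message content -->
    <data>@(context.Request.Body.As<string>())</data>
</invoke-dapr-binding>
```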
## Usage

- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound, outbound, on-error
api-management Ip Filter Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/ip-filter-policy.md
The `ip-filter` policy filters (allows/denies) calls from specific IP addresses
| Attribute | Description | Required | Default |
| -- | - | -- | - |
-| address-range from="address" to="address" | A range of IP addresses to allow or deny access for. | Required when the `address-range` element is used. | N/A |
-| action | Specifies whether calls should be allowed (`allow`) or not (`forbid`) for the specified IP addresses and ranges. | Yes | N/A |
+| action | Specifies whether calls should be allowed (`allow`) or not (`forbid`) for the specified IP addresses and ranges. Policy expressions are allowed. | Yes | N/A |
## Elements

| Element | Description | Required |
| -- | -- | -- |
-| address | Add one or more of these elements to specify a single IP address on which to filter. | At least one `address` or `address-range` element is required. |
+| address | Add one or more of these elements to specify a single IP address on which to filter. Policy expressions are allowed. | At least one `address` or `address-range` element is required. |
| address-range | Add one or more of these elements to specify a range of IP addresses `from` "address" `to` "address" on which to filter. | At least one `address` or `address-range` element is required. |
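For example, a sketch that allows a single address and a range (the addresses are placeholders):

```xml
<ip-filter action="allow">
    <address>13.66.201.169</address>
    <address-range from="13.66.140.128" to="13.66.140.143" />
</ip-filter>
```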
api-management Json To Xml Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/json-to-xml-policy.md
The `json-to-xml` policy converts a request or response body from JSON to XML.
| Attribute | Description | Required | Default |
| -- | -- | -- | - |
-|apply|The attribute must be set to one of the following values.<br /><br /> - `always` - always apply conversion.<br />- `content-type-json` - convert only if response Content-Type header indicates presence of JSON.|Yes|N/A|
-|consider-accept-header|The attribute must be set to one of the following values.<br /><br /> - `true` - apply conversion if XML is requested in request Accept header.<br />- `false` - always apply conversion.|No|`true`|
-|parse-date|When set to `false` date values are simply copied during transformation.|No|`true`|
-|namespace-separator|The character to use as a namespace separator.|No|Underscore|
-|namespace-prefix|The string that identifies property as namespace attribute, usually "xmlns". Properties with names beginning with specified prefix will be added to current element as namespace declarations.|No|N/A|
-|attribute-block-name|When set, properties inside the named object will be added to the element as attributes|No|Not set|
+|apply|The attribute must be set to one of the following values.<br /><br /> - `always` - always apply conversion.<br />- `content-type-json` - convert only if response Content-Type header indicates presence of JSON.<br/><br/>Policy expressions are allowed.|Yes|N/A|
+|consider-accept-header|The attribute must be set to one of the following values.<br /><br /> - `true` - apply conversion if XML is requested in request Accept header.<br />- `false` - always apply conversion.<br/><br/>Policy expressions are allowed.|No|`true`|
+|parse-date|When set to `false` date values are simply copied during transformation. Policy expressions aren't allowed.|No|`true`|
+|namespace-separator|The character to use as a namespace separator. Policy expressions are allowed.|No|Underscore|
+|namespace-prefix|The string that identifies property as namespace attribute, usually "xmlns". Properties with names beginning with specified prefix will be added to current element as namespace declarations. Policy expressions are allowed.|No|N/A|
+|attribute-block-name|When set, properties inside the named object will be added to the element as attributes. Policy expressions are allowed.|No|Not set|
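A minimal sketch that always converts and ignores the Accept header:

```xml
<json-to-xml apply="always" consider-accept-header="false" parse-date="false" />
```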
## Usage
api-management Jsonp Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/jsonp-policy.md
The `jsonp` policy adds JSON with padding (JSONP) support to an operation or an
|Name|Description|Required|Default|
|-|--|--|-|
-|callback-parameter-name|The cross-domain JavaScript function call prefixed with the fully qualified domain name where the function resides.|Yes|N/A|
+|callback-parameter-name|The cross-domain JavaScript function call prefixed with the fully qualified domain name where the function resides. Policy expressions are allowed.|Yes|N/A|
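A minimal sketch, assuming clients pass the callback name in a `cb` query parameter:

```xml
<jsonp callback-parameter-name="cb" />
```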
## Usage
api-management Limit Concurrency Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/limit-concurrency-policy.md
The `limit-concurrency` policy prevents enclosed policies from executing by more
| Attribute | Description | Required | Default |
| -- | -- | -- | - |
-| key | A string. Policy expression allowed. Specifies the concurrency scope. Can be shared by multiple policies. | Yes | N/A |
-| max-count | An integer. Specifies a maximum number of requests that are allowed to enter the policy. | Yes | N/A |
+| key | A string. Specifies the concurrency scope. Can be shared by multiple policies. Policy expressions are allowed. | Yes | N/A |
+| max-count | An integer. Specifies a maximum number of requests that are allowed to enter the policy. Policy expressions aren't allowed. | Yes | N/A |
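As a sketch, the following limits the enclosed `forward-request` to three concurrent executions per key (the `connectionId` variable is a placeholder):

```xml
<limit-concurrency key="@((string)context.Variables["connectionId"])" max-count="3">
    <forward-request timeout="120" />
</limit-concurrency>
```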
## Usage
api-management Log To Eventhub Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/log-to-eventhub-policy.md
The `log-to-eventhub` policy sends messages in the specified format to an event
| Attribute | Description | Required | Default |
| - | - | -- | - |
-| logger-id | The ID of the Logger registered with your API Management service. | Yes | N/A |
-| partition-id | Specifies the index of the partition where messages are sent. | Optional. Do not use if `partition-key` is used. | N/A |
-| partition-key | Specifies the value used for partition assignment when messages are sent. | Optional. Do not use if `partition-id` is used. | N/A |
+| logger-id | The ID of the Logger registered with your API Management service. Policy expressions aren't allowed. | Yes | N/A |
+| partition-id | Specifies the index of the partition where messages are sent. Policy expressions aren't allowed. | Optional. Do not use if `partition-key` is used. | N/A |
+| partition-key | Specifies the value used for partition assignment when messages are sent. Policy expressions are allowed. | Optional. Do not use if `partition-id` is used. | N/A |
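For illustration, a sketch that logs a comma-separated record to a hypothetical logger named `contoso-logger`:

```xml
<log-to-eventhub logger-id="contoso-logger">
    @( string.Join(",", DateTime.UtcNow, context.Deployment.ServiceName, context.RequestId, context.Request.IpAddress) )
</log-to-eventhub>
```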
## Usage
api-management Mock Response Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mock-response-policy.md
# Mock response
-The `mock-response` policy, as the name implies, is used to mock APIs and operations. It cancels normal pipeline execution and returns a mocked response to the caller. The policy always tries to return responses of highest fidelity. It prefers response content examples, when available. It generates sample responses from schemas, when schemas are provided and examples are not. If neither examples or schemas are found, responses with no content are returned.
+The `mock-response` policy, as the name implies, is used to mock APIs and operations. It cancels normal pipeline execution and returns a mocked response to the caller. The policy always tries to return responses of the highest fidelity. It prefers response content examples, when available. It generates sample responses from schemas, when schemas are provided and examples aren't. If neither examples nor schemas are found, responses with no content are returned.
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The `mock-response` policy, as the name implies, is used to mock APIs and operat
| Attribute | Description | Required | Default |
| -- | -- | -- | - |
-| status-code | Specifies response status code and is used to select corresponding example or schema. | No | 200 |
-| content-type | Specifies `Content-Type` response header value and is used to select corresponding example or schema. | No | None |
+| status-code | Specifies response status code and is used to select corresponding example or schema. Policy expressions aren't allowed. | No | 200 |
+| content-type | Specifies `Content-Type` response header value and is used to select corresponding example or schema. Policy expressions aren't allowed. | No | None |
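A minimal sketch that selects the JSON example or schema for the `200` response, if one is defined:

```xml
<mock-response status-code="200" content-type="application/json" />
```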
## Usage
api-management Proxy Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/proxy-policy.md
The `proxy` policy allows you to route requests forwarded to backends via an HTT
| Attribute | Description | Required | Default |
| -- | -- | -- | - |
-| url | Proxy URL in the form of `http://host:port`. | Yes | N/A |
-| username | Username to be used for authentication with the proxy. | No | N/A |
-| password | Password to be used for authentication with the proxy. | No | N/A |
+| url | Proxy URL in the form of `http://host:port`. Policy expressions are allowed. | Yes | N/A |
+| username | Username to be used for authentication with the proxy. Policy expressions are allowed. | No | N/A |
+| password | Password to be used for authentication with the proxy. Policy expressions are allowed. | No | N/A |
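A minimal sketch with a placeholder proxy address and credentials:

```xml
<proxy url="http://192.168.1.1:8080" username="username" password="password" />
```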
## Usage
api-management Publish To Dapr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/publish-to-dapr-policy.md
The policy assumes that Dapr runtime is running in a sidecar container in the sa
| Attribute | Description | Required | Default |
|--|--|--|--|
-| pubsub-name | The name of the target PubSub component. Maps to the [pubsubname](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/pubsub_api.md) parameter in Dapr. If not present, the `topic` attribute value must be in the form of `pubsub-name/topic-name`. | No | None |
-| topic | The name of the topic. Maps to the [topic](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/pubsub_api.md) parameter in Dapr. | Yes | N/A |
-| ignore-error | If set to `true`, instructs the policy not to trigger ["on-error"](api-management-error-handling-policies.md) section upon receiving error from Dapr runtime. | No | `false` |
-| response-variable-name | Name of the [Variables](api-management-policy-expressions.md#ContextVariables) collection entry to use for storing response from Dapr runtime. | No | None |
-| timeout | Time (in seconds) to wait for Dapr runtime to respond. Can range from 1 to 240 seconds. | No | 5 |
+| pubsub-name | The name of the target PubSub component. Maps to the [pubsubname](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/pubsub_api.md) parameter in Dapr. If not present, the `topic` attribute value must be in the form of `pubsub-name/topic-name`. Policy expressions are allowed. | No | None |
+| topic | The name of the topic. Maps to the [topic](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/pubsub_api.md) parameter in Dapr. Policy expressions are allowed. | Yes | N/A |
+| ignore-error | If set to `true`, instructs the policy not to trigger ["on-error"](api-management-error-handling-policies.md) section upon receiving error from Dapr runtime. Policy expressions aren't allowed. | No | `false` |
+| response-variable-name | Name of the [Variables](api-management-policy-expressions.md#ContextVariables) collection entry to use for storing response from Dapr runtime. Policy expressions aren't allowed. | No | None |
+| timeout | Time (in seconds) to wait for Dapr runtime to respond. Can range from 1 to 240 seconds. Policy expressions are allowed. | No | 5 |
| template | Templating engine to use for transforming the message content. "Liquid" is the only supported value. | No | None |
| content-type | Type of the message content. "application/json" is the only supported value. | No | None |
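Putting it together, a sketch that publishes the request body to a hypothetical `new-orders` topic:

```xml
<!-- The pubsub component and topic names are placeholders -->
<publish-to-dapr pubsub-name="pubsub" topic="new-orders">@(context.Request.Body.As<string>())</publish-to-dapr>
```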
api-management Quota By Key Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quota-by-key-policy.md
To understand the difference between rate limits and quotas, [see Rate limits an
| Attribute | Description | Required | Default |
| - | -- | - | - |
-| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| counter-key | The key to use for the `quota policy`. For each key value, a single counter is used for all scopes at which the policy is configured. | Yes | N/A |
-| increment-condition | The Boolean expression specifying if the request should be counted towards the quota (`true`) | No | N/A |
-| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to `first-period-start`. When `renewal-period` is set to `0`, the period is set to infinite. | Yes | N/A |
-| first-period-start | The starting date and time for quota renewal periods, in the following format: `yyyy-MM-ddTHH:mm:ssZ` as specified by the ISO 8601 standard. | No | `0001-01-01T00:00:00Z` |
+| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. Policy expressions aren't allowed.| Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| counter-key | The key to use for the `quota policy`. For each key value, a single counter is used for all scopes at which the policy is configured. Policy expressions are allowed. | Yes | N/A |
+| increment-condition | The Boolean expression specifying if the request should be counted towards the quota (`true`). Policy expressions are allowed. | No | N/A |
+| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to `first-period-start`. When `renewal-period` is set to `0`, the period is set to infinite. Policy expressions aren't allowed. | Yes | N/A |
+| first-period-start | The starting date and time for quota renewal periods, in the following format: `yyyy-MM-ddTHH:mm:ssZ` as specified by the ISO 8601 standard. Policy expressions aren't allowed. | No | `0001-01-01T00:00:00Z` |
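For instance, a sketch that tracks quota per caller IP address and counts only successful responses (the limits are placeholders):

```xml
<quota-by-key calls="10000" bandwidth="40000" renewal-period="3600"
    increment-condition="@(context.Response.StatusCode >= 200 && context.Response.StatusCode < 400)"
    counter-key="@(context.Request.IpAddress)" />
```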
## Usage
api-management Quota Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quota-policy.md
To understand the difference between rate limits and quotas, [see Rate limits an
| Attribute | Description | Required | Default |
| -- | -- | - | - |
-| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to the start time of the subscription. When `renewal-period` is set to `0`, the period is set to infinite.| Yes | N/A |
+| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to the start time of the subscription. When `renewal-period` is set to `0`, the period is set to infinite. Policy expressions aren't allowed.| Yes | N/A |
## Elements
To understand the difference between rate limits and quotas, [see Rate limits an
| Attribute | Description | Required | Default |
| -- | -- | - | - |
| name | The name of the API for which to apply the call quota limit. | Either `name` or `id` must be specified. | N/A |
-| id | The ID of the API for which to apply the call quota. | Either `name` or `id` must be specified. | N/A |
-| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to the start time of the subscription. When `renewal-period` is set to `0`, the period is set to infinite.| Yes | N/A |
+| id | The ID of the API for which to apply the call quota limit. | Either `name` or `id` must be specified. | N/A |
+| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to the start time of the subscription. When `renewal-period` is set to `0`, the period is set to infinite. Policy expressions aren't allowed.| Yes | N/A |
## operation attributes

| Attribute | Description | Required | Default |
| -- | -- | - | - |
-| name | The name of the operation for which to apply the rate limit. | Either `name` or `id` must be specified. | N/A |
-| id | The ID of the operation for which to apply the rate limit. | Either `name` or `id` must be specified. | N/A |
-| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to the start time of the subscription. When `renewal-period` is set to `0`, the period is set to infinite.| Yes | N/A |
+| name | The name of the operation for which to apply the call quota limit. | Either `name` or `id` must be specified. | N/A |
+| id | The ID of the operation for which to apply the call quota limit. | Either `name` or `id` must be specified. | N/A |
+| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
+| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to the start time of the subscription. When `renewal-period` is set to `0`, the period is set to infinite. Policy expressions aren't allowed.| Yes | N/A |
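A minimal sketch of a subscription-level quota (the values are placeholders):

```xml
<quota calls="10000" bandwidth="40000" renewal-period="3600" />
```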
## Usage
To understand the difference between rate limits and quotas, [see Rate limits an
### Usage notes

* This policy can be used only once per policy definition.
-* [Policy expressions](api-management-policy-expressions.md) can't be used in attribute values for this policy.
* This policy is only applied when an API is accessed using a subscription key.
api-management Rate Limit By Key Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rate-limit-by-key-policy.md
To understand the difference between rate limits and quotas, [see Rate limits an
| Attribute | Description | Required | Default |
| - | -- | -- | - |
-| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. Policy expression is allowed. | Yes | N/A |
-| counter-key | The key to use for the rate limit policy. For each key value, a single counter is used for all scopes at which the policy is configured. | Yes | N/A |
-| increment-condition | The Boolean expression specifying if the request should be counted towards the rate (`true`). | No | N/A |
-| increment-count | The number by which the counter is increased per request. | No | 1 |
-| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Policy expression is allowed. Maximum allowed value: 300 seconds. | Yes | N/A |
-| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | `Retry-After` |
-| retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A |
-| remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
-| remaining-calls-variable-name | The name of a policy expression variable that after each policy execution stores the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
-| total-calls-header-name | The name of a response header whose value is the value specified in `calls`. | No | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. Policy expressions are allowed. | Yes | N/A |
+| counter-key | The key to use for the rate limit policy. For each key value, a single counter is used for all scopes at which the policy is configured. Policy expressions are allowed. | Yes | N/A |
+| increment-condition | The Boolean expression specifying if the request should be counted towards the rate (`true`). Policy expressions are allowed. | No | N/A |
+| increment-count | The number by which the counter is increased per request. Policy expressions are allowed. | No | 1 |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. Policy expressions are allowed. | Yes | N/A |
+| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. Policy expressions aren't allowed. | No | `Retry-After` |
+| retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. Policy expressions aren't allowed. | No | N/A |
+| remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | No | N/A |
+| remaining-calls-variable-name | The name of a policy expression variable that after each policy execution stores the number of remaining calls allowed for the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | No | N/A |
+| total-calls-header-name | The name of a response header whose value is the value specified in `calls`. Policy expressions aren't allowed. | No | N/A |
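For example, a sketch that limits each caller IP address to 10 calls per 60 seconds:

```xml
<rate-limit-by-key calls="10" renewal-period="60"
    increment-condition="@(context.Response.StatusCode == 200)"
    counter-key="@(context.Request.IpAddress)" />
```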
## Usage
api-management Rate Limit Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rate-limit-policy.md
To understand the difference between rate limits and quotas, [see Rate limits an
| Attribute | Description | Required | Default |
| -- | -- | -- | - |
-| calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. | Yes | N/A |
-| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. | Yes | N/A |
-| total-calls-header-name | The name of a response header whose value is the value specified in `calls`. | No | N/A |
-| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | `Retry-After` |
-| retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A |
-| remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
-| remaining-calls-variable-name | The name of a policy expression variable that after each policy execution stores the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
-| total-calls-header-name | The name of a response header whose value is the value specified in `calls`. | No | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. Policy expressions aren't allowed.| Yes | N/A |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. Policy expressions aren't allowed. | Yes | N/A |
+| total-calls-header-name | The name of a response header whose value is the value specified in `calls`. Policy expressions aren't allowed. | No | N/A |
+| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. Policy expressions aren't allowed. | No | `Retry-After` |
+| retry-after-variable-name | The name of a variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. Policy expressions aren't allowed. | No | N/A |
+| remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. Policy expressions aren't allowed.| No | N/A |
+| remaining-calls-variable-name | The name of a variable that after each policy execution stores the number of remaining calls allowed for the time interval specified in the `renewal-period`. Policy expressions aren't allowed.| No | N/A |
+| total-calls-header-name | The name of a response header whose value is the value specified in `calls`. Policy expressions aren't allowed.| No | N/A |
## Elements
To understand the difference between rate limits and quotas, [see Rate limits an
| -- | -- | -- | - |
| name | The name of the API for which to apply the rate limit. | Either `name` or `id` must be specified. | N/A |
| id | The ID of the API for which to apply the rate limit. | Either `name` or `id` must be specified. | N/A |
-| calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. | Yes | N/A |
-| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. | Yes | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. Policy expressions aren't allowed.| Yes | N/A |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. Policy expressions aren't allowed. | Yes | N/A |
### operation attributes
To understand the difference between rate limits and quotas, [see Rate limits an
| -- | -- | -- | - |
| name | The name of the operation for which to apply the rate limit. | Either `name` or `id` must be specified. | N/A |
| id | The ID of the operation for which to apply the rate limit. | Either `name` or `id` must be specified. | N/A |
-| calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. | Yes | N/A |
-| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. | Yes | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. Policy expressions aren't allowed.| Yes | N/A |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. Policy expressions aren't allowed. | Yes | N/A |
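A minimal sketch of a subscription-level rate limit:

```xml
<rate-limit calls="20" renewal-period="90" />
```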
## Usage
To understand the difference between rate limits and quotas, [see Rate limits an
### Usage notes

* This policy can be used only once per policy definition.
-* Except where noted, [policy expressions](api-management-policy-expressions.md) can't be used in attribute values for this policy.
* This policy is only applied when an API is accessed using a subscription key.
* [!INCLUDE [api-management-self-hosted-gateway-rate-limit](../../includes/api-management-self-hosted-gateway-rate-limit.md)] [Learn more](how-to-self-hosted-gateway-on-kubernetes-in-production.md#request-throttling)
api-management Retry Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/retry-policy.md
The `retry` policy executes its child policies once and then retries their execu
| Attribute | Description | Required | Default |
| - | -- | -- | - |
-| condition | A Boolean literal or [expression](api-management-policy-expressions.md) specifying if retries should be stopped (`false`) or continued (`true`). | Yes | N/A |
-| count | A positive number specifying the maximum number of retries to attempt. | Yes | N/A |
-| interval | A positive number in seconds specifying the wait interval between the retry attempts. | Yes | N/A |
-| max-interval | A positive number in seconds specifying the maximum wait interval between the retry attempts. It is used to implement an exponential retry algorithm. | No | N/A |
-| delta | A positive number in seconds specifying the wait interval increment. It is used to implement the linear and exponential retry algorithms. | No | N/A |
-| first-fast-retry | If set to `true` , the first retry attempt is performed immediately. | No | `false` |
+| condition | Boolean. Specifies whether retries should be stopped (`false`) or continued (`true`). Policy expressions are allowed. | Yes | N/A |
+| count | A positive number specifying the maximum number of retries to attempt. Policy expressions are allowed. | Yes | N/A |
+| interval | A positive number in seconds specifying the wait interval between the retry attempts. Policy expressions are allowed. | Yes | N/A |
+| max-interval | A positive number in seconds specifying the maximum wait interval between the retry attempts. It is used to implement an exponential retry algorithm. Policy expressions are allowed. | No | N/A |
+| delta | A positive number in seconds specifying the wait interval increment. It is used to implement the linear and exponential retry algorithms. Policy expressions are allowed. | No | N/A |
+| first-fast-retry | Boolean. If set to `true`, the first retry attempt is performed immediately. Policy expressions are allowed. | No | `false` |
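As a sketch, the following retries the enclosed `forward-request` up to three times while the backend returns a 500:

```xml
<retry condition="@(context.Response.StatusCode == 500)" count="3" interval="10" first-fast-retry="true">
    <forward-request buffer-request-body="true" />
</retry>
```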
## Retry wait times
api-management Return Response Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/return-response-policy.md
The `return-response` policy cancels pipeline execution and returns either a def
| Attribute | Description | Required | Default |
| - | -- | -- | - |
-| response-variable-name | The name of the context variable referenced from, for example, an upstream [send-request](send-request-policy.md) policy and containing a `Response` object. | No | N/A |
+| response-variable-name | The name of the context variable referenced from, for example, an upstream [send-request](send-request-policy.md) policy and containing a `Response` object. Policy expressions aren't allowed. | No | N/A |
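For illustration, a sketch that returns a 401 by composing the policy's `set-status` and `set-header` child elements:

```xml
<return-response>
    <set-status code="401" reason="Unauthorized" />
    <set-header name="WWW-Authenticate" exists-action="override">
        <value>Bearer error="invalid_token"</value>
    </set-header>
</return-response>
```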
## Elements
api-management Rewrite Uri Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rewrite-uri-policy.md
Previously updated : 12/08/2022
Last updated : 03/28/2023
The `rewrite-uri` policy converts a request URL from its public form to the form
- Request URL - `http://api.example.com/v2/US/hardware/storenumber&ordernumber?City&State`
-This policy can be used when a human and/or browser-friendly URL should be transformed into the URL format expected by the web service. This policy only needs to be applied when exposing an alternative URL format, such as clean URLs, RESTful URLs, user-friendly URLs or SEO-friendly URLs that are purely structural URLs that do not contain a query string and instead contain only the path of the resource (after the scheme and the authority). This is often done for aesthetic, usability, or search engine optimization (SEO) purposes.
+This policy can be used when a human- and/or browser-friendly URL should be transformed into the URL format expected by the web service. It only needs to be applied when exposing an alternative URL format, such as clean, RESTful, user-friendly, or SEO-friendly URLs: purely structural URLs that don't contain a query string and instead contain only the path of the resource (after the scheme and the authority). This is often done for aesthetic, usability, or search engine optimization (SEO) purposes.
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
This policy can be used when a human and/or browser-friendly URL should be trans
|Name|Description|Required|Default| |-|--|--|-|
-|template|The actual web service URL with any query string parameters. When using expressions, the whole value must be an expression.|Yes|N/A|
-|copy-unmatched-params|Specifies whether query parameters in the incoming request not present in the original URL template are added to the URL defined by the rewrite template.|No|`true`|
+|template|The actual web service URL with any query string parameters. Policy expressions are allowed. When expressions are used, the whole value must be an expression. |Yes|N/A|
+|copy-unmatched-params|Specifies whether query parameters in the incoming request not present in the original URL template are added to the URL defined by the rewrite template. Policy expressions are allowed.|No|`true`|
## Usage
This policy can be used when a human and/or browser-friendly URL should be trans
### Usage notes
-You can only add query string parameters using the policy. You cannot add extra template path parameters in the rewrite URL.
+You can only add query string parameters using the policy. You can't add extra template path parameters in the rewrite URL.
## Example
api-management Send One Way Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-one-way-request-policy.md
The `send-one-way-request` policy sends the provided request to the specified UR
| Attribute | Description | Required | Default | | - | -- | -- | -- |
-| mode | Determines whether this is a `new` request or a `copy` of the current request. In outbound mode, `mode=copy` does not initialize the request body. | No | `new` |
-| timeout| The timeout interval in seconds before the call to the URL fails. | No | 60 |
+| mode | Determines whether this is a `new` request or a `copy` of the current request. In outbound mode, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` |
+| timeout| The timeout interval in seconds before the call to the URL fails. Policy expressions are allowed. | No | 60 |
## Elements | Element | Description | Required | | -- | -- | - |
-| set-url | The URL of the request. | No if `mode=copy`; otherwise yes. |
-| set-method | A [set-method](set-method-policy.md) policy statement. | No if `mode=copy`; otherwise yes. |
-| set-header | A [set-header](set-header-policy.md) policy statement. Use multiple `set-header` elements for multiple request headers. | No |
-| set-body | A [set-body](set-body-policy.md) policy statement. | No |
+| set-url | The URL of the request. Policy expressions are allowed. | No if `mode=copy`; otherwise yes. |
+| [set-method](set-method-policy.md) | Sets the method of the request. Policy expressions aren't allowed. | No if `mode=copy`; otherwise yes. |
+| [set-header](set-header-policy.md) | Sets a header in the request. Use multiple `set-header` elements for multiple request headers. | No |
+| [set-body](set-body-policy.md) | Sets the body of the request. | No |
| authentication-certificate | [Certificate to use for client authentication](authentication-certificate-policy.md), specified in a `thumbprint` attribute. | No |
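For illustration, a minimal fire-and-forget sketch that posts a notification to a hypothetical webhook; the URL and message shape are assumptions. Because it reads `context.Response`, it assumes placement in the `outbound` section:

```xml
<send-one-way-request mode="new" timeout="20">
    <set-url>https://hooks.example.com/alert</set-url>
    <set-method>POST</set-method>
    <set-header name="Content-Type" exists-action="override">
        <value>application/json</value>
    </set-header>
    <set-body>@{
        // Build a small JSON payload describing the current call
        return new JObject(
            new JProperty("operation", context.Operation.Name),
            new JProperty("status", context.Response.StatusCode)
        ).ToString();
    }</set-body>
</send-one-way-request>
```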
api-management Send Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-request-policy.md
The `send-request` policy sends the provided request to the specified URL, waiti
| Attribute | Description | Required | Default | | - | -- | -- | -- |
-| mode | Determines whether this is a `new` request or a `copy` of the current request. In outbound mode, `mode=copy` does not initialize the request body. | No | `new` |
-| response-variable-name | The name of context variable that will receive a response object. If the variable doesn't exist, it will be created upon successful execution of the policy and will become accessible via [`context.Variable`](api-management-policy-expressions.md#ContextVariables) collection. | Yes | N/A |
-| timeout | The timeout interval in seconds before the call to the URL fails. | No | 60 |
-| ignore-error | If `true` and the request results in an error, the error will be ignored, and the response variable will contain a null value. | No | `false` |
+| mode | Determines whether this is a `new` request or a `copy` of the current request. In outbound mode, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` |
+| response-variable-name | The name of the context variable that will receive a response object. If the variable doesn't exist, it's created upon successful execution of the policy and becomes accessible via the [`context.Variable`](api-management-policy-expressions.md#ContextVariables) collection. Policy expressions are allowed. | Yes | N/A |
+| timeout | The timeout interval in seconds before the call to the URL fails. Policy expressions are allowed. | No | 60 |
+| ignore-error | If `true` and the request results in an error, the error will be ignored, and the response variable will contain a null value. Policy expressions aren't allowed. | No | `false` |
## Elements | Element | Description | Required | | -- | -- | - |
-| set-url | The URL of the request. | No if `mode=copy`; otherwise yes. |
-| set-method | A [set-method](set-method-policy.md) policy statement. | No if `mode=copy`; otherwise yes. |
-| set-header | A [set-header](set-header-policy.md) policy statement. Use multiple `set-header` elements for multiple request headers. | No |
-| set-body | A [set-body](set-body-policy.md) policy statement. | No |
+| set-url | The URL of the request. Policy expressions are allowed. | No if `mode=copy`; otherwise yes. |
+| [set-method](set-method-policy.md) | Sets the method of the request. Policy expressions aren't allowed. | No if `mode=copy`; otherwise yes. |
+| [set-header](set-header-policy.md) | Sets a header in the request. Use multiple `set-header` elements for multiple request headers. | No |
+| [set-body](set-body-policy.md) | Sets the body of the request. | No |
| authentication-certificate | [Certificate to use for client authentication](authentication-certificate-policy.md), specified in a `thumbprint` attribute. | No | | proxy | A [proxy](proxy-policy.md) policy statement. Used to route request via HTTP proxy | No |
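For illustration, a minimal sketch that calls out to a hypothetical verification endpoint and stores the result in a context variable; the URL and body shape are assumptions:

```xml
<send-request mode="new" response-variable-name="verificationResponse" timeout="20" ignore-error="true">
    <set-url>https://auth.example.com/introspect</set-url>
    <set-method>POST</set-method>
    <set-header name="Content-Type" exists-action="override">
        <value>application/x-www-form-urlencoded</value>
    </set-header>
    <!-- Forward the bearer token (without the scheme prefix) for verification -->
    <set-body>@("token=" + context.Request.Headers.GetValueOrDefault("Authorization","").Replace("Bearer ",""))</set-body>
</send-request>
```

Later policies can then inspect the result, for example via `((IResponse)context.Variables["verificationResponse"]).StatusCode`.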
api-management Set Backend Service Dapr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-backend-service-dapr-policy.md
The policy assumes that Dapr runs in a sidecar container in the same pod as the
| Attribute | Description | Required | Default | |||-|| | backend-id | Must be set to "dapr". | Yes | N/A |
-| dapr-app-id | Name of the target microservice. Used to form the [appId](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/service_invocation_api.md) parameter in Dapr.| Yes | N/A |
-| dapr-method | Name of the method or a URL to invoke on the target microservice. Maps to the [method-name](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/service_invocation_api.md) parameter in Dapr.| Yes | N/A |
-| dapr-namespace | Name of the namespace the target microservice is residing in. Used to form the [appId](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/service_invocation_api.md) parameter in Dapr.| No | N/A |
+| dapr-app-id | Name of the target microservice. Used to form the [appId](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/service_invocation_api.md) parameter in Dapr. Policy expressions are allowed. | Yes | N/A |
+| dapr-method | Name of the method or a URL to invoke on the target microservice. Maps to the [method-name](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/service_invocation_api.md) parameter in Dapr. Policy expressions are allowed. | Yes | N/A |
+| dapr-namespace | Name of the namespace the target microservice is residing in. Used to form the [appId](https://github.com/dapr/docs/blob/master/daprdocs/content/en/reference/api/service_invocation_api.md) parameter in Dapr. Policy expressions are allowed. | No | N/A |
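For example, a minimal sketch that routes the request to a hypothetical Dapr microservice named `orders` and invokes its `status` method; both names are assumptions:

```xml
<set-backend-service backend-id="dapr" dapr-app-id="orders" dapr-method="status" />
```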
## Usage
api-management Set Backend Service Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-backend-service-policy.md
Use the `set-backend-service` policy to redirect an incoming request to a differ
| Attribute | Description | Required | Default | | -- | | -- | - |
-|base-url|New backend service base URL.|One of `base-url` or `backend-id` must be present.|N/A|
-|backend-id|Identifier (name) of the backend to route primary or secondary replica of a partition. |One of `base-url` or `backend-id` must be present.|N/A|
-|sf-resolve-condition|Only applicable when the backend is a Service Fabric service. Condition identifying if the call to Service Fabric backend has to be repeated with new resolution.|No|N/A|
-|sf-service-instance-name|Only applicable when the backend is a Service Fabric service. Allows changing service instances at runtime. |No|N/A|
-|sf-listener-name|Only applicable when the backend is a Service Fabric service and is specified using `backend-id`. Service Fabric Reliable Services allows you to create multiple listeners in a service. This attribute is used to select a specific listener when a backend Reliable Service has more than one listener. If this attribute isn't specified, API Management will attempt to use a listener without a name. A listener without a name is typical for Reliable Services that have only one listener. |No|N/A|
+|base-url|New backend service base URL. Policy expressions are allowed.|One of `base-url` or `backend-id` must be present.|N/A|
+|backend-id|Identifier (name) of the backend to route primary or secondary replica of a partition. Policy expressions are allowed. |One of `base-url` or `backend-id` must be present.|N/A|
+|sf-resolve-condition|Only applicable when the backend is a Service Fabric service. Condition identifying if the call to Service Fabric backend has to be repeated with new resolution. Policy expressions are allowed.|No|N/A|
+|sf-service-instance-name|Only applicable when the backend is a Service Fabric service. Allows changing service instances at runtime. Policy expressions are allowed. |No|N/A|
+|sf-partition-key|Only applicable when the backend is a Service Fabric service. Specifies the partition key of a Service Fabric service. Policy expressions are allowed. |No|N/A|
+|sf-listener-name|Only applicable when the backend is a Service Fabric service and is specified using `backend-id`. Service Fabric Reliable Services allows you to create multiple listeners in a service. This attribute is used to select a specific listener when a backend Reliable Service has more than one listener. If this attribute isn't specified, API Management will attempt to use a listener without a name. A listener without a name is typical for Reliable Services that have only one listener. Policy expressions are allowed.|No|N/A|
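For illustration, a minimal sketch that switches the backend base URL based on a query parameter; the URLs and version value are assumptions:

```xml
<choose>
    <when condition="@(context.Request.Url.Query.GetValueOrDefault("version") == "2013-05")">
        <!-- Route callers of the legacy version to the older backend -->
        <set-backend-service base-url="https://contoso.example.com/api/8.2/" />
    </when>
    <otherwise>
        <set-backend-service base-url="https://contoso.example.com/api/9.1/" />
    </otherwise>
</choose>
```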
## Usage
api-management Set Body Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-body-policy.md
Use the `set-body` policy to set the message body for incoming and outgoing requ
| Attribute | Description | Required | Default | | -- | | -- | - |
-|template|Used to change the templating mode that the `set-body` policy will run in. Currently the only supported value is:<br /><br />- `liquid` - the `set-body` policy will use the liquid templating engine |No| N/A|
-|xsi-nil| Used to control how elements marked with `xsi:nil="true"` are represented in XML payloads. Set to one of the following values:<br /><br />- `blank` - `nil` is represented with an empty string.<br />- `null` - `nil` is represented with a null value.|No | `blank` |
+|template|Used to change the templating mode that the `set-body` policy runs in. Currently the only supported value is:<br /><br />- `liquid` - the `set-body` policy uses the liquid templating engine |No| N/A|
+|xsi-nil| Used to control how elements marked with `xsi:nil="true"` are represented in XML payloads. Set to one of the following values:<br /><br />- `blank` - `nil` is represented with an empty string.<br />- `null` - `nil` is represented with a null value.<br/></br>Policy expressions aren't allowed. |No | `blank` |
For accessing information about the request and response, the Liquid template can bind to a context object with the following properties: <br /> <pre>context.
OriginalUrl.
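For illustration, a minimal Liquid sketch that reshapes a JSON request body; the field names are assumptions, and the incoming body is assumed to be JSON so that the `body` binding resolves:

```xml
<set-body template="liquid">
{
    "fullName": "{{body.firstName}} {{body.lastName}}"
}
</set-body>
```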
### Usage notes
+ - If you're using the `set-body` policy to return a new or updated body, you don't need to set `preserveContent` to `true` because you're explicitly supplying the new body contents.
+ - Preserving the content of a response in the inbound pipeline doesn't make sense because there's no response yet.
- Preserving the content of a request in the outbound pipeline doesn't make sense because the request has already been sent to the backend at this point.
+ - If this policy is used when there's no message body, for example in an inbound `GET`, an exception is thrown.
For more information, see the `context.Request.Body`, `context.Response.Body`, and the `IMessageBody` sections in the [Context variable](api-management-policy-expressions.md#ContextVariables) table.
The following Liquid filters are supported in the `set-body` policy. For filter
### Accessing the body as a string
-We are preserving the original request body so that we can access it later in the pipeline.
+We're preserving the original request body so that we can access it later in the pipeline.
```xml <set-body>
We are preserving the original request body so that we can access it later in th
### Accessing the body as a JObject
-Since we are not reserving the original request body, accessing it later in the pipeline will result in an exception.
+Since we're not preserving the original request body, accessing it later in the pipeline will result in an exception.
```xml <set-body> 
This example shows how to perform content filtering by removing data elements fr
``` ### Access the body as URL-encoded form data
-The following example uses the `AsFormUrlEncodedContent()` expression to access the request body as URL-encoded form data (content type `application/x-www-form-urlencoded`), and then converts it to JSON. Since we are not reserving the original request body, accessing it later in the pipeline will result in an exception.
+The following example uses the `AsFormUrlEncodedContent()` expression to access the request body as URL-encoded form data (content type `application/x-www-form-urlencoded`), and then converts it to JSON. Since we're not preserving the original request body, accessing it later in the pipeline will result in an exception.
```xml <set-body> 
The following example uses the `AsFormUrlEncodedContent()` expression to access
``` ### Access and return body as URL-encoded form data
-The following example uses the `AsFormUrlEncodedContent()` expression to access the request body as URL-encoded form data (content type `application/x-www-form-urlencoded`), adds data to the payload, and returns URL-encoded form data. Since we are not reserving the original request body, accessing it later in the pipeline will result in an exception.
+The following example uses the `AsFormUrlEncodedContent()` expression to access the request body as URL-encoded form data (content type `application/x-www-form-urlencoded`), adds data to the payload, and returns URL-encoded form data. Since we're not preserving the original request body, accessing it later in the pipeline will result in an exception.
```xml <set-body> 
api-management Set Header Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-header-policy.md
The `set-header` policy assigns a value to an existing HTTP response and/or requ
|Name|Description|Required|Default| |-|--|--|-|
-|exists-action|Specifies action to take when the header is already specified. This attribute must have one of the following values.<br /><br /> - `override` - replaces the value of the existing header.<br />- `skip` - does not replace the existing header value.<br />- `append` - appends the value to the existing header value.<br />- `delete` - removes the header from the request.<br /><br /> When set to `override`, enlisting multiple entries with the same name results in the header being set according to all entries (which will be listed multiple times); only listed values will be set in the result.|No|`override`|
-|name|Specifies name of the header to be set.|Yes|N/A|
+|exists-action|Specifies action to take when the header is already specified. This attribute must have one of the following values.<br /><br /> - `override` - replaces the value of the existing header.<br />- `skip` - does not replace the existing header value.<br />- `append` - appends the value to the existing header value.<br />- `delete` - removes the header from the request.<br /><br /> When set to `override`, enlisting multiple entries with the same name results in the header being set according to all entries (which will be listed multiple times); only listed values will be set in the result. <br/><br/>Policy expressions are allowed.|No|`override`|
+|name|Specifies name of the header to be set. Policy expressions are allowed.|Yes|N/A|
## Elements |Name|Description|Required| |-|--|--|
-|value|Specifies the value of the header to be set. For multiple headers with the same name, add additional `value` elements.|No|
+|value|Specifies the value of the header to be set. Policy expressions are allowed. For multiple headers with the same name, add additional `value` elements.|No|
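For example, a minimal sketch that stamps each request with a correlation header; the header name is an illustrative assumption:

```xml
<set-header name="X-Correlation-Id" exists-action="override">
    <!-- context.RequestId is the unique identifier of the current request -->
    <value>@(context.RequestId.ToString())</value>
</set-header>
```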
## Usage
api-management Set Method Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-method-policy.md
The `set-method` policy allows you to change the HTTP request method for a reque
<set-method>HTTP method</set-method> ```
-The value of the element specifies the HTTP method, such as `POST`, `GET`, and so on.
+The value of the element specifies the HTTP method, such as `POST`, `GET`, and so on. Policy expressions are allowed.
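For illustration, a sketch that rewrites the method from a hypothetical override header, falling back to the original method when the header is absent:

```xml
<set-method>@(context.Request.Headers.GetValueOrDefault("X-HTTP-Method-Override", context.Request.Method))</set-method>
```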
## Usage
api-management Set Query Parameter Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-query-parameter-policy.md
The `set-query-parameter` policy adds, replaces value of, or deletes request que
|Name|Description|Required|Default| |-|--|--|-|
-|exists-action|Specifies what action to take when the query parameter is already specified. This attribute must have one of the following values.<br /><br /> - `override` - replaces the value of the existing parameter.<br />- `skip` - does not replace the existing query parameter value.<br />- `append` - appends the value to the existing query parameter value.<br />- `delete` - removes the query parameter from the request.<br /><br /> When set to `override` enlisting multiple entries with the same name results in the query parameter being set according to all entries (which will be listed multiple times); only listed values will be set in the result.|No|`override`|
-|name|Specifies name of the query parameter to be set.|Yes|N/A|
+|exists-action|Specifies what action to take when the query parameter is already specified. This attribute must have one of the following values.<br /><br /> - `override` - replaces the value of the existing parameter.<br />- `skip` - does not replace the existing query parameter value.<br />- `append` - appends the value to the existing query parameter value.<br />- `delete` - removes the query parameter from the request.<br /><br /> When set to `override`, enlisting multiple entries with the same name results in the query parameter being set according to all entries (which will be listed multiple times); only listed values will be set in the result.<br/><br/>Policy expressions are allowed. |No|`override`|
+|name|Specifies name of the query parameter to be set. Policy expressions are allowed. |Yes|N/A|
## Elements |Name|Description|Required| |-|--|--|
-|value|Specifies the value of the query parameter to be set. For multiple query parameters with the same name, add additional `value` elements.|Yes|
+|value|Specifies the value of the query parameter to be set. For multiple query parameters with the same name, add additional `value` elements. Policy expressions are allowed. |Yes|
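For example, a minimal sketch that adds a default `api-version` query parameter only when the caller didn't supply one; the parameter name and value are assumptions:

```xml
<set-query-parameter name="api-version" exists-action="skip">
    <value>2023-03-01</value>
</set-query-parameter>
```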
## Usage
api-management Set Status Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-status-policy.md
The `set-status` policy sets the HTTP status code to the specified value.
| Attribute | Description | Required | Default | | | - | -- | - |
-| code | Integer. The HTTP status code to return. | Yes | N/A |
-| reason | String. A description of the reason for returning the status code. | Yes | N/A |
+| code | Integer. The HTTP status code to return. Policy expressions are allowed. | Yes | N/A |
+| reason | String. A description of the reason for returning the status code. Policy expressions are allowed. | Yes | N/A |
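For example, a minimal sketch, typically combined with `return-response` or used in an error-handling section:

```xml
<set-status code="401" reason="Unauthorized" />
```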
## Usage
api-management Set Variable Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-variable-policy.md
The `set-variable` policy declares a [context](api-management-policy-expressions
| Attribute | Description | Required | | | | -- |
-| name | The name of the variable. | Yes |
-| value | The value of the variable. This can be an expression or a literal value. | Yes |
+| name | The name of the variable. Policy expressions aren't allowed. | Yes |
+| value | The value of the variable. This can be an expression or a literal value. Policy expressions are allowed. | Yes |
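For illustration, a sketch that computes a Boolean from the User-Agent header and stores it for later policies:

```xml
<set-variable name="isMobile" value="@(context.Request.Headers.GetValueOrDefault("User-Agent","").Contains("iPhone"))" />
```

A later policy can read the value back with `(bool)context.Variables["isMobile"]`.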
## Usage
api-management Trace Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/trace-policy.md
The `trace` policy adds a custom trace into the request tracing output in the te
| Attribute | Description | Required | Default | | | - | -- | - |
-| source | String literal meaningful to the trace viewer and specifying the source of the message. | Yes | N/A |
-| severity | Specifies the severity level of the trace. Allowed values are `verbose`, `information`, `error` (from lowest to highest). | No | `verbose` |
+| source | String literal meaningful to the trace viewer and specifying the source of the message. Policy expressions aren't allowed. | Yes | N/A |
+| severity | Specifies the severity level of the trace. Allowed values are `verbose`, `information`, `error` (from lowest to highest). Policy expressions aren't allowed. | No | `verbose` |
## Elements |Name|Description|Required| |-|--|--|
-| message | A string or expression to be logged. | Yes |
+| message | A string or expression to be logged. Policy expressions are allowed. | Yes |
| metadata | Adds a custom property to the Application Insights [Trace](../azure-monitor/app/data-model-complete.md#trace) telemetry. | No | ### metadata attributes
api-management Validate Client Certificate Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-client-certificate-policy.md
For more information about custom CA certificates and certificate authorities, s
| Name | Description | Required | Default | | - | --| -- | -- |
-| validate-revocation | Boolean. Specifies whether certificate is validated against online revocation list. | No | `true` |
-| validate-trust | Boolean. Specifies if validation should fail in case chain cannot be successfully built up to trusted CA. | No | `true` |
-| validate-not-before | Boolean. Validates value against current time. | No | `true` |
-| validate-not-after | Boolean. Validates value against current time. | No | `true`|
-| ignore-error | Boolean. Specifies if policy should proceed to the next handler or jump to on-error upon failed validation. | No | `false` |
-| identity | String. Combination of certificate claim values that make certificate valid. | Yes | N/A |
+| validate-revocation | Boolean. Specifies whether certificate is validated against online revocation list. Policy expressions aren't allowed. | No | `true` |
+| validate-trust | Boolean. Specifies if validation should fail in case chain cannot be successfully built up to trusted CA. Policy expressions aren't allowed. | No | `true` |
+| validate-not-before | Boolean. Validates value against current time. Policy expressions aren't allowed. | No | `true` |
+| validate-not-after | Boolean. Validates value against current time. Policy expressions aren't allowed. | No | `true`|
+| ignore-error | Boolean. Specifies if policy should proceed to the next handler or jump to on-error upon failed validation. Policy expressions aren't allowed. | No | `false` |
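For illustration, a minimal sketch that enables all validations and pins a single allowed certificate via the `identities` element described below; the thumbprint is a placeholder:

```xml
<validate-client-certificate
    validate-revocation="true"
    validate-trust="true"
    validate-not-before="true"
    validate-not-after="true"
    ignore-error="false">
    <identities>
        <identity thumbprint="0123456789ABCDEF0123456789ABCDEF01234567" />
    </identities>
</validate-client-certificate>
```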
## Elements
api-management Validate Content Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-content-policy.md
The policy validates the following content in the request or response against th
| Attribute | Description | Required | Default | | -- | | -- | - |
-| unspecified-content-type-action | [Action](#actions) to perform for requests or responses with a content type that isn't specified in the API schema. | Yes | N/A |
-| max-size | Maximum length of the body of the request or response in bytes, checked against the `Content-Length` header. If the request body or response body is compressed, this value is the decompressed length. Maximum allowed value: 102,400 bytes (100 KB). (Contact [support](https://azure.microsoft.com/support/options/) if you need to increase this limit.) | Yes | N/A |
-| size-exceeded-action | [Action](#actions) to perform for requests or responses whose body exceeds the size specified in `max-size`. | Yes | N/A |
-| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
+| unspecified-content-type-action | [Action](#actions) to perform for requests or responses with a content type that isn't specified in the API schema. Policy expressions are allowed. | Yes | N/A |
+| max-size | Maximum length of the body of the request or response in bytes, checked against the `Content-Length` header. If the request body or response body is compressed, this value is the decompressed length. Maximum allowed value: 102,400 bytes (100 KB). (Contact [support](https://azure.microsoft.com/support/options/) if you need to increase this limit.) Policy expressions are allowed. | Yes | N/A |
+| size-exceeded-action | [Action](#actions) to perform for requests or responses whose body exceeds the size specified in `max-size`. Policy expressions are allowed.| Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. Policy expressions aren't allowed. | No | N/A |
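For illustration, a minimal sketch that validates JSON request bodies against the API schema using the `content` element described below; the variable name is an assumption:

```xml
<validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="detect" errors-variable-name="requestBodyValidation">
    <content type="application/json" validate-as="json" action="prevent" />
</validate-content>
```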
## Elements
The policy validates the following content in the request or response against th
| Attribute | Description | Required | Default | | -- | | -- | - |
-| any-content-type-value | Content type used for validation of the body of a request or response, regardless of the incoming content type. | No | N/A |
-| missing-content-type-value | Content type used for validation of the body of a request or response, when the incoming content type is missing or empty. | No | N/A |
+| any-content-type-value | Content type used for validation of the body of a request or response, regardless of the incoming content type. Policy expressions aren't allowed. | No | N/A |
+| missing-content-type-value | Content type used for validation of the body of a request or response, when the incoming content type is missing or empty. Policy expressions aren't allowed. | No | N/A |
### content-type-map-elements
api-management Validate Graphql Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-graphql-request-policy.md
The `validate-graphql-request` policy validates the GraphQL request and authoriz
| Attribute | Description | Required | Default | | -- | | -- | - |
-| error-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
-| max-size | Maximum size of the request payload in bytes. Maximum allowed value: 102,400 bytes (100 KB). (Contact [support](https://azure.microsoft.com/support/options/) if you need to increase this limit.) | Yes | N/A |
-| max-depth | An integer. Maximum query depth. | No | 6 |
+| error-variable-name | Name of the variable in `context.Variables` to log validation errors to. Policy expressions are allowed. | No | N/A |
+| max-size | Maximum size of the request payload in bytes. Maximum allowed value: 102,400 bytes (100 KB). (Contact [support](https://azure.microsoft.com/support/options/) if you need to increase this limit.) Policy expressions are allowed. | Yes | N/A |
+| max-depth | An integer. Maximum query depth. Policy expressions are allowed. | No | 6 |
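For example, a minimal sketch that caps payload size and query depth; the variable name and depth value are illustrative assumptions:

```xml
<validate-graphql-request error-variable-name="graphqlErrors" max-size="102400" max-depth="4" />
```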
## Elements
api-management Validate Headers Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-headers-policy.md
The `validate-headers` policy validates the response headers against the API sch
| Attribute | Description | Required | Default | | -- | | -- | - |
-| specified-header-action | [Action](#actions) to perform for response headers specified in the API schema. | Yes | N/A |
-| unspecified-header-action | [Action](#actions) to perform for response headers that aren't specified in the API schema. | Yes | N/A |
-| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
+| specified-header-action | [Action](#actions) to perform for response headers specified in the API schema. Policy expressions are allowed. | Yes | N/A |
+| unspecified-header-action | [Action](#actions) to perform for response headers that aren't specified in the API schema. Policy expressions are allowed. | Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. Policy expressions aren't allowed. | No | N/A |
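For example, a minimal sketch that blocks response headers not declared in the API schema; the variable name is an assumption:

```xml
<validate-headers specified-header-action="ignore" unspecified-header-action="prevent" errors-variable-name="responseHeadersValidation" />
```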
## Elements
api-management Validate Jwt Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-jwt-policy.md
The `validate-jwt` policy enforces existence and validity of a supported JSON we
| Attribute | Description | Required | Default | | - | | -- | |
-| header-name | The name of the HTTP header holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
-| query-parameter-name | The name of the query parameter holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
-| token-value | Expression returning a string containing the token. You must not return `Bearer ` as part of the token value. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
-| failed-validation-httpcode | HTTP Status code to return if the JWT doesn't pass validation. | No | 401 |
-| failed-validation-error-message | Error message to return in the HTTP response body if the JWT doesn't pass validation. This message must have any special characters properly escaped. | No | Default error message depends on validation issue, for example "JWT not present." |
-| require-expiration-time | Boolean. Specifies whether an expiration claim is required in the token. | No | true |
-| require-scheme | The name of the token scheme, for example, "Bearer". When this attribute is set, the policy will ensure that specified scheme is present in the Authorization header value. | No | N/A |
-| require-signed-tokens | Boolean. Specifies whether a token is required to be signed. | No | true |
-| clock-skew | Timespan. Use to specify maximum expected time difference between the system clocks of the token issuer and the API Management instance. | No | 0 seconds |
-| output-token-variable-name | String. Name of context variable that will receive token value as an object of type [`Jwt`](api-management-policy-expressions.md) upon successful token validation | No | N/A |
+| header-name | The name of the HTTP header holding the token. Policy expressions are allowed. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| query-parameter-name | The name of the query parameter holding the token. Policy expressions are allowed. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| token-value | Expression returning a string containing the token. You must not return `Bearer ` as part of the token value. Policy expressions are allowed. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| failed-validation-httpcode | HTTP status code to return if the JWT doesn't pass validation. Policy expressions are allowed. | No | 401 |
+| failed-validation-error-message | Error message to return in the HTTP response body if the JWT doesn't pass validation. This message must have any special characters properly escaped. Policy expressions are allowed. | No | Default error message depends on validation issue, for example "JWT not present." |
+| require-expiration-time | Boolean. Specifies whether an expiration claim is required in the token. Policy expressions are allowed. | No | true |
+| require-scheme | The name of the token scheme, for example, "Bearer". When this attribute is set, the policy will ensure that specified scheme is present in the Authorization header value. Policy expressions are allowed. | No | N/A |
+| require-signed-tokens | Boolean. Specifies whether a token is required to be signed. Policy expressions are allowed. | No | true |
+| clock-skew | Timespan. Use to specify maximum expected time difference between the system clocks of the token issuer and the API Management instance. Policy expressions are allowed. | No | 0 seconds |
+| output-token-variable-name | String. Name of the context variable that will receive the token value as an object of type [`Jwt`](api-management-policy-expressions.md) upon successful token validation. Policy expressions aren't allowed. | No | N/A |
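For illustration, a minimal sketch that validates a bearer token against an OpenID Connect metadata endpoint; the tenant and audience values are placeholders:

```xml
<validate-jwt header-name="Authorization" require-scheme="Bearer" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized">
    <openid-config url="https://login.microsoftonline.com/{tenant-id}/v2.0/.well-known/openid-configuration" />
    <audiences>
        <audience>{client-app-id}</audience>
    </audiences>
</validate-jwt>
```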
api-management Validate Parameters Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-parameters-policy.md
The `validate-parameters` policy validates the header, query, or path parameters
| Attribute | Description | Required | Default | | -- | | -- | - |
-| specified-parameter-action | [Action](#actions) to perform for request parameters specified in the API schema. <br/><br/> When provided in a `headers`, `query`, or `path` element, the value overrides the value of `specified-parameter-action` in the `validate-parameters` element. | Yes | N/A |
-| unspecified-parameter-action | [Action](#actions) to perform for request parameters that aren't specified in the API schema. <br/><br/>When provided in a `headers` or `query` element, the value overrides the value of `unspecified-parameter-action` in the `validate-parameters` element. | Yes | N/A |
-| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
-| name | Name of the parameter to override validation action for. This value is case insensitive. | Yes | N/A |
-| action | [Action](#actions) to perform for the parameter with the matching name. If the parameter is specified in the API schema, this value overrides the higher-level `specified-parameter-action` configuration. If the parameter isn't specified in the API schema, this value overrides the higher-level `unspecified-parameter-action` configuration.| Yes | N/A |
+| specified-parameter-action | [Action](#actions) to perform for request parameters specified in the API schema. <br/><br/> When provided in a `headers`, `query`, or `path` element, the value overrides the value of `specified-parameter-action` in the `validate-parameters` element. Policy expressions are allowed. | Yes | N/A |
+| unspecified-parameter-action | [Action](#actions) to perform for request parameters that aren't specified in the API schema. <br/><br/>When provided in a `headers` or `query` element, the value overrides the value of `unspecified-parameter-action` in the `validate-parameters` element. Policy expressions are allowed. | Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. Policy expressions aren't allowed. | No | N/A |
## Elements |Name|Description|Required| |-|--|--|
-| headers | Add this element to override default validation [actions](#actions) for header parameters in requests. | No |
-| query | Add this element to override default validation [actions](#actions) for query parameters in requests. | No |
-| path | Add this element to override default validation [actions](#actions) for URL path parameters in requests. | No |
-| parameter | Add one or more elements for named parameters to override higher-level configuration of the validation [actions](#actions). | No |
+| headers | Add this element and one or more `parameter` subelements to override default validation [actions](#actions) for certain named parameters in requests. If the parameter is specified in the API schema, this value overrides the higher-level `specified-parameter-action` configuration. If the parameter isn't specified in the API schema, this value overrides the higher-level `unspecified-parameter-action` configuration. | No |
+| query | Add this element and one or more `parameter` subelements to override default validation [actions](#actions) for certain named query parameters in requests. If the parameter is specified in the API schema, this value overrides the higher-level `specified-parameter-action` configuration. If the parameter isn't specified in the API schema, this value overrides the higher-level `unspecified-parameter-action` configuration. | No |
+| path | Add this element and one or more `parameter` subelements to override default validation [actions](#actions) for certain URL path parameters in requests. If the parameter is specified in the API schema, this value overrides the higher-level `specified-parameter-action` configuration. If the parameter isn't specified in the API schema, this value overrides the higher-level `unspecified-parameter-action` configuration. | No |
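For illustration, a minimal sketch that prevents unknown parameters overall but exempts a single named header; the header name is an assumption:

```xml
<validate-parameters specified-parameter-action="prevent" unspecified-parameter-action="prevent" errors-variable-name="requestParametersValidation">
    <headers specified-parameter-action="ignore">
        <parameter name="Authorization" action="ignore" />
    </headers>
</validate-parameters>
```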
[!INCLUDE [api-management-validation-policy-actions](../../includes/api-management-validation-policy-actions.md)]
api-management Validate Status Code Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-status-code-policy.md
The `validate-status-code` policy validates the HTTP status codes in responses a
| Attribute | Description | Required | Default | | -- | | -- | - |
-| unspecified-status-code-action | [Action](#actions) to perform for HTTP status codes in responses that aren't specified in the API schema. | Yes | N/A |
-| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
+| unspecified-status-code-action | [Action](#actions) to perform for HTTP status codes in responses that aren't specified in the API schema. Policy expressions are allowed. | Yes | N/A |
+| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. Policy expressions aren't allowed. | No | N/A |
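For example, a minimal sketch that applies the `prevent` action to status codes not declared in the API schema; the variable name is an assumption:

```xml
<validate-status-code unspecified-status-code-action="prevent" errors-variable-name="responseStatusCodeValidation" />
```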
## Elements
api-management Wait Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/wait-policy.md
The `wait` policy executes its immediate child policies in parallel, and waits f
| Attribute | Description | Required | Default | | -- | | -- | - |
-| for | Determines whether the `wait` policy waits for all immediate child policies to be completed or just one. Allowed values are:<br /><br /> - `all` - wait for all immediate child policies to complete<br />- `any` - wait for any immediate child policy to complete. Once the first immediate child policy has completed, the `wait` policy completes and execution of any other immediate child policies is terminated. | No | `all` |
+| for | Determines whether the `wait` policy waits for all immediate child policies to be completed or just one. Allowed values are:<br /><br /> - `all` - wait for all immediate child policies to complete<br />- `any` - wait for any immediate child policy to complete. Once the first immediate child policy has completed, the `wait` policy completes and execution of any other immediate child policies is terminated.<br/><br/>Policy expressions are allowed. | No | `all` |
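For illustration, a minimal sketch that issues two backend calls in parallel and continues once both complete; the URLs and variable names are assumptions:

```xml
<wait for="all">
    <send-request mode="new" response-variable-name="catalogResponse" timeout="10" ignore-error="true">
        <set-url>https://backend-a.example.com/catalog</set-url>
        <set-method>GET</set-method>
    </send-request>
    <send-request mode="new" response-variable-name="inventoryResponse" timeout="10" ignore-error="true">
        <set-url>https://backend-b.example.com/inventory</set-url>
        <set-method>GET</set-method>
    </send-request>
</wait>
```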
## Elements
api-management Xml To Json Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/xml-to-json-policy.md
The `xml-to-json` policy converts a request or response body from XML to JSON. T
| Attribute | Description | Required | Default | | -- | | -- | - |
-|kind|The attribute must be set to one of the following values.<br /><br /> - `javascript-friendly` - the converted JSON has a form friendly to JavaScript developers.<br />- `direct` - the converted JSON reflects the original XML document's structure.|Yes|N/A|
-|apply|The attribute must be set to one of the following values.<br /><br /> - `always` - convert always.<br />- `content-type-xml` - convert only if response Content-Type header indicates presence of XML.|Yes|N/A|
-|consider-accept-header|The attribute must be set to one of the following values.<br /><br /> - `true` - apply conversion if JSON is requested in request Accept header.<br />- `false` -always apply conversion.|No|`true`|
+|kind|The attribute must be set to one of the following values.<br /><br /> - `javascript-friendly` - the converted JSON has a form friendly to JavaScript developers.<br />- `direct` - the converted JSON reflects the original XML document's structure.<br/><br/>Policy expressions are allowed.|Yes|N/A|
+|apply|The attribute must be set to one of the following values.<br /><br /> - `always` - convert always.<br />- `content-type-xml` - convert only if response Content-Type header indicates presence of XML.<br/><br/>Policy expressions are allowed.|Yes|N/A|
+|consider-accept-header|The attribute must be set to one of the following values.<br /><br /> - `true` - apply conversion if JSON is requested in request Accept header.<br />- `false` - always apply conversion.<br/><br/>Policy expressions are allowed.|No|`true`|
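For example, a minimal sketch that always converts XML responses to JavaScript-friendly JSON, regardless of the caller's Accept header:

```xml
<xml-to-json kind="javascript-friendly" apply="always" consider-accept-header="false" />
```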
## Usage
api-management Xsl Transform Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/xsl-transform-policy.md
Previously updated : 08/26/2022 Last updated : 03/28/2023
app-service Version Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/version-comparison.md
Title: 'App Service Environment version comparison' description: This article provides an overview of the App Service Environment versions and feature differences between them. Previously updated : 3/20/2023 Last updated : 3/30/2023
App Service Environment has three versions. App Service Environment v3 is the la
|Network watcher or NSG flow logs to monitor traffic |Yes |Yes |Yes | |Subnet delegation |Not required |Not required |[Must be delegated to `Microsoft.Web/hostingEnvironments`](networking.md#subnet-requirements) | |Subnet size|An App Service Environment v1 with no App Service plans uses 12 addresses before you create an app. If you use an ILB App Service Environment v1, then it uses 13 addresses before you create an app. As you scale out, infrastructure roles are added at every multiple of 15 and 20 of your App Service plan instances. |An App Service Environment v2 with no App Service plans uses 12 addresses before you create an app. If you use an ILB App Service Environment v2, then it uses 13 addresses before you create an app. As you scale out, infrastructure roles are added at every multiple of 15 and 20 of your App Service plan instances. |Any particular subnet has five addresses reserved for management purposes. In addition to the management addresses, App Service Environment v3 dynamically scales the supporting infrastructure, and uses between 4 and 27 addresses, depending on the configuration and load. You can use the remaining addresses for instances in the App Service plan. The minimal size of your subnet can be a /27 address space (32 addresses). |
+|DNS fallback |Azure DNS |Azure DNS |[Ensure that you have a forwarder to a public DNS or include Azure DNS in the list of custom DNS servers](migrate.md#migration-feature-limitations) |
### Scaling
application-gateway Application Gateway Private Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-private-deployment.md
+
+ Title: Private Application Gateway deployment (preview)
+
+description: Learn how to restrict access to Application Gateway
++++ Last updated : 03/01/2023+
+#Customer intent: As an administrator, I want to evaluate Azure Private Application Gateway
++
+# Private Application Gateway deployment (preview)
+
+## Introduction
+
+Historically, Application Gateway v2 SKUs, and to a certain extent v1, have required public IP addressing to enable management of the service. This requirement has imposed several limitations in using fine-grain controls in Network Security Groups and Route Tables. Specifically, the following challenges have been observed:
+
+1. All Application Gateway v2 deployments must contain a public-facing frontend IP configuration to enable communication to the **Gateway Manager** service tag.
+2. Network Security Group associations require rules to allow inbound access from GatewayManager and outbound access to the Internet.
+3. When introducing a default route (0.0.0.0/0) to forward traffic anywhere other than the Internet, metrics, monitoring, and updates of the gateway result in a failed status.
+
+Application Gateway v2 can now address each of these items to further eliminate risk of data exfiltration and control privacy of communication from within the virtual network. These changes include the following capabilities:
+
+1. Private IP address only frontend IP configuration
+ - No public IP address resource required
+2. Elimination of inbound traffic from GatewayManager service tag via Network Security Group
+3. Ability to define a **Deny All** outbound Network Security Group (NSG) rule to restrict egress traffic to the Internet
+4. Ability to override the default route to the Internet (0.0.0.0/0)
+5. DNS resolution via defined resolvers on the virtual network, including private link private DNS zones. [Learn more](../virtual-network/manage-virtual-network.md#change-dns-servers)
+
+Each of these features can be configured independently. For example, a public IP address can be used to allow traffic inbound from the Internet and you can define a **_Deny All_** outbound rule in the network security group configuration to prevent data exfiltration.
+
+## Onboard to public preview
+
+The new controls for private IP frontend configuration, NSG rules, and route tables are currently in public preview. To join the public preview, you can opt in to the experience using the Azure portal, PowerShell, CLI, or REST API.
+
+When you join the preview, all new Application Gateway deployments are provisioned with the ability to define any combination of the NSG, route table, and private IP configuration features. If you wish to opt out of the new functionality and return to the current generally available functionality of Application Gateway, you can do so by [unregistering from the preview](#unregister-from-the-preview).
+
+For more information about preview features, see [Set up preview features in Azure subscription](../azure-resource-manager/management/preview-features.md)
+
+## Register to the preview
+
+# [Azure Portal](#tab/portal)
+
+Use the following steps to enroll in the public preview for the enhanced Application Gateway network controls via the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. In the search box, enter _subscriptions_ and select **Subscriptions**.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/search.png" alt-text="Azure portal search.":::
+
+3. Select the link for your subscription's name.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/subscriptions.png" alt-text="Select Azure subscription.":::
+
+4. From the left menu, under **Settings** select **Preview features**.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-menu.png" alt-text="Azure preview features menu.":::
+
+5. You see a list of available preview features and your current registration status.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-list.png" alt-text="Azure portal list of preview features.":::
+
+6. In **Preview features**, enter **EnableApplicationGatewayNetworkIsolation** in the filter box, select the feature, and then select **Register**.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/filter.png" alt-text="Azure portal filter preview features.":::
+
+# [Azure PowerShell](#tab/powershell)
+
+To enroll in the public preview for the enhanced Application Gateway network controls via Azure PowerShell, use the following command:
+
+```azurepowershell
+Register-AzProviderFeature -FeatureName "EnableApplicationGatewayNetworkIsolation" -ProviderNamespace "Microsoft.Network"
+```
+
+To view the registration status of the feature, use the `Get-AzProviderFeature` cmdlet:
+```Output
+FeatureName                              ProviderName      RegistrationState
+-----------                              ------------      -----------------
+EnableApplicationGatewayNetworkIsolation Microsoft.Network Registered
+```
+
+# [Azure CLI](#tab/cli)
+
+To enroll in the public preview for the enhanced Application Gateway network controls via Azure CLI, use the following command:
+
+```azurecli
+az feature register --name EnableApplicationGatewayNetworkIsolation --namespace Microsoft.Network
+```
+
+To view the registration status of the feature, use the `az feature show` command:
+```Output
+Name                                                        RegistrationState
+----                                                        -----------------
+Microsoft.Network/EnableApplicationGatewayNetworkIsolation  Registered
+```
+
+A list of all Azure CLI references for Private Link Configuration on Application Gateway can be found here: [Azure CLI - Private Link](/cli/azure/network/application-gateway/private-link)
+++
+For more information about preview features, see [Set up preview features in Azure subscription](../azure-resource-manager/management/preview-features.md)
+
+## Unregister from the preview
+
+# [Azure Portal](#tab/portal)
+
+To opt out of the public preview for the enhanced Application Gateway network controls via Portal, use the following steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. In the search box, enter _subscriptions_ and select **Subscriptions**.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/search.png" alt-text="Azure portal search.":::
+
+3. Select the link for your subscription's name.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/subscriptions.png" alt-text="Select Azure subscription.":::
+
+4. From the left menu, under **Settings** select **Preview features**.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-menu.png" alt-text="Azure preview features menu.":::
+
+5. You see a list of available preview features and your current registration status.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/preview-features-list.png" alt-text="Azure portal list of preview features.":::
+
+6. In **Preview features**, enter **EnableApplicationGatewayNetworkIsolation** in the filter box, select the feature, and then select **Unregister**.
+
+ :::image type="content" source="../azure-resource-manager/management/media/preview-features/filter.png" alt-text="Azure portal filter preview features.":::
+
+# [Azure PowerShell](#tab/powershell)
+
+To opt out of the public preview for the enhanced Application Gateway network controls via Azure PowerShell, use the following command:
+
+```azurepowershell
+Unregister-AzProviderFeature -FeatureName "EnableApplicationGatewayNetworkIsolation" -ProviderNamespace "Microsoft.Network"
+```
+
+To view the registration status of the feature, use the `Get-AzProviderFeature` cmdlet:
+```Output
+FeatureName                              ProviderName      RegistrationState
+-----------                              ------------      -----------------
+EnableApplicationGatewayNetworkIsolation Microsoft.Network Unregistered
+```
+
+# [Azure CLI](#tab/cli)
+
+To opt out of the public preview for the enhanced Application Gateway network controls via Azure CLI, use the following command:
+
+```azurecli
+az feature unregister --name EnableApplicationGatewayNetworkIsolation --namespace Microsoft.Network
+```
+
+To view the registration status of the feature, use the `az feature show` command:
+```Output
+Name                                                        RegistrationState
+----                                                        -----------------
+Microsoft.Network/EnableApplicationGatewayNetworkIsolation  Unregistered
+```
+
+A list of all Azure CLI references for Private Link Configuration on Application Gateway can be found here: [Azure CLI - Private Link](/cli/azure/network/application-gateway/private-link)
+++
+## Regions and availability
+
+The Private Application Gateway preview is available in all public cloud regions [where the Application Gateway v2 SKU is supported](./overview-v2.md#unsupported-regions).
+
+## Configuration of network controls
+
+After registration in the public preview, configuration of the NSG, route table, and private IP address frontend features can be performed using any method: for example, REST API, ARM template, Bicep, Terraform, PowerShell, CLI, or the Azure portal. No API or command changes are introduced with this public preview.
+
+## Resource Changes
+
+After your gateway is provisioned, a resource tag is automatically assigned with the name **EnhancedNetworkControl** and a value of **True**. See the following example:
+
+ ![View the EnhancedNetworkControl tag](./media/application-gateway-private-deployment/tags.png)
+
+The resource tag is cosmetic, and serves to confirm that the gateway has been provisioned with the capabilities to configure any combination of the private only gateway features. Modification or deletion of the tag or value doesn't change any functional workings of the gateway.
+
+> [!TIP]
+> The **EnhancedNetworkControl** tag can be helpful when existing Application Gateways were deployed in the subscription prior to feature enablement and you would like to differentiate which gateway can utilize the new functionality.
+
+## Outbound Internet connectivity
+
+Application Gateway deployments that contain only a private frontend IP configuration (and don't have a public IP frontend configuration) aren't able to egress traffic destined to the Internet. This configuration affects communication to backend targets that are publicly accessible via the Internet.
+
+To enable outbound connectivity from your Application Gateway to an Internet facing backend target, you can utilize [Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md) or forward traffic to a virtual appliance that has access to the Internet.
+
+Virtual Network NAT offers control over the IP address or prefix to be used, as well as a configurable idle timeout. To configure it, create a new NAT Gateway with a public IP address or public prefix and associate it with the subnet containing Application Gateway.
+
+If a virtual appliance is required for Internet egress, see the [route table control](#route-table-control) section in this document.
+
+Common scenarios where public IP usage is required:
+- Communication to key vault without use of private endpoints or service endpoints
+ - Outbound communication isn't required for pfx files uploaded to Application Gateway directly
+- Communication to backend targets via Internet
+- Communication to Internet facing CRL or OCSP endpoints
+
+## Network Security Group Control
+
+Network security groups associated with an Application Gateway subnet no longer require inbound rules for GatewayManager, and they don't require outbound access to the Internet. The only required rule is **Allow inbound from AzureLoadBalancer** to ensure health probes can reach the gateway.
+
+The following configuration is an example of the most restrictive set of inbound rules, denying all traffic except Azure health probes. In addition, explicit rules allow client traffic to reach the gateway's listener.
+
+ [ ![View the inbound security group rules](./media/application-gateway-private-deployment/inbound-rules.png) ](./media/application-gateway-private-deployment/inbound-rules.png#lightbox)
+
+> [!Note]
+> Application Gateway displays an alert asking you to ensure that the **Allow LoadBalanceRule** is specified if a **DenyAll** rule inadvertently restricts access to health probes.
+
+### Example scenario
+
+This example walks through creation of an NSG using the Azure portal with the following rules:
+
+- Allow inbound traffic to port 80 and 8080 to Application Gateway from client requests originating from the Internet
+- Deny all other inbound traffic
+- Allow outbound traffic to a backend target in another virtual network
+- Allow outbound traffic to a backend target that is Internet accessible
+- Deny all other outbound traffic
+
+First, [create a network security group](../virtual-network/tutorial-filter-network-traffic.md#create-a-network-security-group). This security group contains your inbound and outbound rules.
+
+#### Inbound rules
+
+Three inbound [default rules](../virtual-network/network-security-groups-overview.md#default-security-rules) are already provisioned in the security group. See the following example:
+
+ [ ![View default security group rules](./media/application-gateway-private-deployment/default-rules.png) ](./media/application-gateway-private-deployment/default-rules.png#lightbox)
+
+Next, create the following four new inbound security rules:
+
+- Allow inbound port 80, tcp, from Internet (any)
+- Allow inbound port 8080, tcp, from Internet (any)
+- Allow inbound from AzureLoadBalancer
+- Deny Any Inbound
+
+To create these rules:
+- Select **Inbound security rules**
+- Select **Add**
+- Enter the following information for each rule into the **Add inbound security rule** pane.
+- When you've entered the information, select **Add** to create the rule.
+- Creation of each rule takes a moment.
+
+| Rule # | Source | Source service tag | Source port ranges | Destination | Service | Dest port ranges | Protocol | Action | Priority | Name |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 1 | Any | | * | Any | HTTP | 80 | TCP | Allow | 1028 | AllowWeb |
+| 2 | Any | | * | Any | Custom | 8080 | TCP | Allow | 1029 | AllowWeb8080 |
+| 3 | Service Tag | AzureLoadBalancer | * | Any | Custom | * | Any | Allow | 1045 | AllowLB |
+| 4 | Any | | * | Any | Custom | * | Any | Deny | 4095 | DenyAllInbound |
+
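+For reference, rule 3 in the table (allowing Azure health probes) could be created with the Azure CLI as follows (a sketch; the resource group and NSG names are placeholders):
+
+```bash
+# Allow inbound traffic from the Azure load balancer (health probes)
+az network nsg rule create --resource-group MyRG --nsg-name appgw-nsg \
+  --name AllowLB --priority 1045 --direction Inbound --access Allow \
+  --protocol '*' --source-address-prefixes AzureLoadBalancer \
+  --source-port-ranges '*' --destination-address-prefixes '*' \
+  --destination-port-ranges '*'
+```
+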
+Select **Refresh** to review all rules when provisioning is complete.
+
+ [ ![View example inbound security group rules](./media/application-gateway-private-deployment/inbound-example.png) ](./media/application-gateway-private-deployment/inbound-example.png#lightbox)
+
+#### Outbound rules
+
+Three default outbound rules with priority 65000, 65001, and 65500 are already provisioned.
+
+Create the following three new outbound security rules:
+
+- Allow TCP 443 from 10.10.4.0/24 to backend target 20.63.8.49
+- Allow TCP 80 from source 10.10.4.0/24 to destination 10.13.0.4
+- DenyAll traffic rule
+
+These rules are assigned a priority of 400, 401, and 4096, respectively.
+
+> [!NOTE]
+> - 10.10.4.0/24 is the Application Gateway subnet address space.
+> - 10.13.0.4 is a virtual machine in a peered VNet.
+> - 20.63.8.49 is a backend target VM.
+
+To create these rules:
+- Select **Outbound security rules**
+- Select **Add**
+- Enter the following information for each rule into the **Add outbound security rule** pane.
+- When you've entered the information, select **Add** to create the rule.
+- Creation of each rule takes a moment.
+
+| Rule # | Source | Source IP addresses/CIDR ranges | Source port ranges | Destination | Destination IP addresses/CIDR ranges | Service | Dest port ranges | Protocol | Action | Priority | Name |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 1 | IP Addresses | 10.10.4.0/24 | * | IP Addresses | 20.63.8.49 | HTTPS | 443 | TCP | Allow | 400 | AllowToBackendTarget |
+| 2 | IP Addresses | 10.10.4.0/24 | * | IP Addresses | 10.13.0.4 | HTTP | 80 | TCP | Allow | 401 | AllowToPeeredVnetVM |
+| 3 | Any | | * | Any | | Custom | * | Any | Deny | 4096 | DenyAll |
+
+Select **Refresh** to review all rules when provisioning is complete.
+
+[ ![View example outbound security group rules](./media/application-gateway-private-deployment/outbound-example.png) ](./media/application-gateway-private-deployment/outbound-example.png#lightbox)
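+
+Equivalently, the first outbound rule could be created with the Azure CLI (a sketch; the resource group and NSG names are placeholders):
+
+```bash
+# Allow outbound TCP 443 from the Application Gateway subnet to the backend target
+az network nsg rule create --resource-group MyRG --nsg-name appgw-nsg \
+  --name AllowToBackendTarget --priority 400 --direction Outbound --access Allow \
+  --protocol Tcp --source-address-prefixes 10.10.4.0/24 \
+  --destination-address-prefixes 20.63.8.49 --destination-port-ranges 443
+```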
+
+#### Associate NSG to the subnet
+
+The last step is to [associate the network security group to the subnet](../virtual-network/tutorial-filter-network-traffic.md#associate-network-security-group-to-subnet) that contains your Application Gateway.
+
+![Associate NSG to subnet](./media/application-gateway-private-deployment/nsg-subnet.png)
+
+Result:
+
+[ ![View the NSG overview](./media/application-gateway-private-deployment/nsg-overview.png) ](./media/application-gateway-private-deployment/nsg-overview.png#lightbox)
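+
+The same association can also be made with the Azure CLI (a sketch; resource names are placeholders):
+
+```bash
+# Associate the NSG with the Application Gateway subnet
+az network vnet subnet update --resource-group MyRG --vnet-name MyVNet \
+  --name appgw-subnet --network-security-group appgw-nsg
+```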
+
+> [!IMPORTANT]
+> Be careful when you define **DenyAll** rules, as you might inadvertently deny inbound traffic from clients to which you intend to allow access. You might also inadvertently deny outbound traffic to the backend target, causing backend health to fail and produce 5XX responses.
+
+## Route Table Control
+
+In the current offering of Application Gateway, associating a route table with a rule (or creating a rule) that defines 0.0.0.0/0 with a next hop of virtual appliance is unsupported, to ensure proper management of Application Gateway.
+
+After registration of the public preview feature, you can forward traffic to a virtual appliance by defining a route table rule for 0.0.0.0/0 with a next hop of Virtual Appliance.
+
+Forced tunneling, or learning of the 0.0.0.0/0 route through BGP advertising, doesn't affect Application Gateway health, and is honored for traffic flow. This scenario can be applicable when using VPN, ExpressRoute, Route Server, or Virtual WAN.
+
+### Example scenario
+
+In the following example, we create a route table and associate it to the Application Gateway subnet to ensure outbound Internet access from the subnet egresses through a virtual appliance. At a high level, the following design is summarized in Figure 1:
+- The Application Gateway is in spoke virtual network
+- There is a network virtual appliance (a virtual machine) in the hub network
+- A route table with a default route (0.0.0.0/0) to the virtual appliance is associated to Application Gateway subnet
+
+![Diagram for example route table](./media/application-gateway-private-deployment/route-table-diagram.png)
+
+**Figure 1**: Internet access egress through virtual appliance
+
+To create a route table and associate it to the Application Gateway subnet:
+
+1. [Create a route table](../virtual-network/manage-route-table.md#create-a-route-table):
+
+ ![View the newly created route table](./media/application-gateway-private-deployment/route-table-create.png)
+
+2. Select **Routes** and create a rule for address prefix 0.0.0.0/0, with the next hop type set to **Virtual appliance** and the next hop address set to the IP address of your appliance VM:
+
+    [ ![View of adding default route to network virtual appliance](./media/application-gateway-private-deployment/default-route-nva.png) ](./media/application-gateway-private-deployment/default-route-nva.png#lightbox)
+
+3. Select **Subnets** and associate the route table to the Application Gateway subnet:
+
+ [ ![View of associating the route to the AppGW subnet](./media/application-gateway-private-deployment/associate-route-to-subnet.png) ](./media/application-gateway-private-deployment/associate-route-to-subnet.png#lightbox)
+
+4. Validate that traffic is passing through the virtual appliance.
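+
+The equivalent configuration with the Azure CLI might look like this (a sketch; resource names and the appliance IP address are placeholders):
+
+```bash
+# Create a route table with a default route to the virtual appliance,
+# then associate it with the Application Gateway subnet
+az network route-table create --resource-group MyRG --name appgw-rt
+az network route-table route create --resource-group MyRG --route-table-name appgw-rt \
+  --name default-to-nva --address-prefix 0.0.0.0/0 \
+  --next-hop-type VirtualAppliance --next-hop-ip-address 10.1.0.4
+az network vnet subnet update --resource-group MyRG --vnet-name spoke-vnet \
+  --name appgw-subnet --route-table appgw-rt
+```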
+
+## Limitations / Known Issues
+
+While in public preview, the following limitations are known.
+
+### Private link configuration (preview)
+
+[Private link configuration](private-link.md) support for tunneling traffic through private endpoints to Application Gateway isn't available for a private-only gateway.
+
+### Private IP frontend configuration only with AGIC
+
+AGIC v1.7 must be used to introduce support for a private frontend IP configuration only.
+
+### Private Endpoint connectivity via Global VNet Peering
+
+If Application Gateway has a backend target or key vault reference to a private endpoint located in a VNet that is accessible via global VNet peering, traffic is dropped, resulting in an unhealthy status.
+
+### Coexisting v2 Application Gateways created prior to enablement of enhanced network control
+
+If a subnet contains Application Gateway v2 deployments that were created both before and after enablement of the enhanced network control functionality, Network Security Group (NSG) and Route Table functionality is limited to the prior gateway deployment. Application gateways provisioned before enablement of the new functionality must either be reprovisioned, or newly created gateways must use a different subnet to enable enhanced network security group and route table features.
+
+- If a gateway deployed prior to enablement of the new functionality exists in the subnet, you might see errors such as: `For routes associated to subnet containing Application Gateway V2, please ensure '0.0.0.0/0' uses Next Hop Type as 'Internet'` when adding route table entries.
+- When adding network security group rules to the subnet, you might see: `Failed to create security rule 'DenyAnyCustomAnyOutbound'. Error: Network security group \<NSG-name\> blocks outgoing Internet traffic on subnet \<AppGWSubnetId\>, associated with Application Gateway \<AppGWResourceId\>. This isn't permitted for Application Gateways that have fast update enabled or have V2 Sku.`
+
+### Unknown Backend Health status
+
+If backend health is _Unknown_, you may see the following error:
+
+> The backend health status could not be retrieved. This happens when an NSG/UDR/Firewall on the application gateway subnet is blocking traffic on ports 65503-65534 in case of v1 SKU, and ports 65200-65535 in case of the v2 SKU or if the FQDN configured in the backend pool could not be resolved to an IP address. To learn more visit - https://aka.ms/UnknownBackendHealth.
+
+This error can be ignored and will be clarified in a future release.
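+
+To inspect backend health while troubleshooting, a command like the following can be used (a sketch; the gateway and resource group names are placeholders):
+
+```bash
+# Query the current backend health of the application gateway
+az network application-gateway show-backend-health \
+  --resource-group MyRG --name MyAppGateway
+```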
+
+## Next steps
+
+- See [Azure security baseline for Application Gateway](/security/benchmark/azure/baselines/application-gateway-security-baseline.md) for more security best practices.
+
application-gateway Application Gateway Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-probe-overview.md
Azure Application Gateway by default monitors the health of all resources in its backend pool and automatically removes any resource considered unhealthy from the pool. Application Gateway continues to monitor the unhealthy instances and adds them back to the healthy backend pool once they become available and respond to health probes. By default, Application gateway sends the health probes with the same port that is defined in the backend HTTP settings. A custom probe port can be configured using a custom health probe.
-The source IP address that Application Gateway uses for health probes will depend on the backend pool:
+The source IP address that Application Gateway uses for health probes depends on the backend pool:
- If the server address in the backend pool is a public endpoint, then the source address is the application gateway's frontend public IP address.
- If the server address in the backend pool is a private endpoint, then the source IP address is from the application gateway subnet's private IP address space.
-![application gateway probe example][1]
In addition to using default health probe monitoring, you can also customize the health probe to suit your application's requirements. In this article, both default and custom health probes are covered.
In addition to using default health probe monitoring, you can also customize the
An application gateway automatically configures a default health probe when you don't set up any custom probe configuration. The monitoring behavior works by making an HTTP GET request to the IP addresses or FQDN configured in the backend pool. For default probes, if the backend HTTP settings are configured for HTTPS, the probe uses HTTPS to test health of the backend servers.
-For example: You configure your application gateway to use backend servers A, B, and C to receive HTTP network traffic on port 80. The default health monitoring tests the three servers every 30 seconds for a healthy HTTP response with a 30 second timeout for each request. A healthy HTTP response has a [status code](https://msdn.microsoft.com/library/aa287675.aspx) between 200 and 399. In this case, the HTTP GET request for the health probe will look like `http://127.0.0.1/`.
+For example: You configure your application gateway to use backend servers A, B, and C to receive HTTP network traffic on port 80. The default health monitoring tests the three servers every 30 seconds for a healthy HTTP response, with a 30-second timeout for each request. A healthy HTTP response has a [status code](https://msdn.microsoft.com/library/aa287675.aspx) between 200 and 399. In this case, the HTTP GET request for the health probe looks like `http://127.0.0.1/`.
If the default probe check fails for server A, the application gateway stops forwarding requests to this server. The default probe still continues to check for server A every 30 seconds. When server A responds successfully to one request from a default health probe, application gateway starts forwarding the requests to the server again.
The following table provides definitions for the properties of a custom health p
| Probe property | Description |
| --- | --- |
| Name |Name of the probe. This name is used to identify and refer to the probe in backend HTTP settings. |
-| Protocol |Protocol used to send the probe. This has to match with the protocol defined in the backend HTTP settings it is associated to|
-| Host |Host name to send the probe with. In v1 SKU, this value will be used only for the host header of the probe request. In v2 SKU, it will be used both as host header as well as SNI |
+| Protocol |Protocol used to send the probe. This has to match with the protocol defined in the backend HTTP settings it's associated to|
+| Host |Host name to send the probe with. In v1 SKU, this value is used only for the host header of the probe request. In v2 SKU, it is used both as host header and SNI |
| Path |Relative path of the probe. A valid path starts with '/' |
-| Port |If defined, this is used as the destination port. Otherwise, it uses the same port as the HTTP settings that it is associated to. This property is only available in the v2 SKU
+| Port |If defined, this is used as the destination port. Otherwise, it uses the same port as the HTTP settings that it's associated to. This property is only available in the v2 SKU
| Interval |Probe interval in seconds. This value is the time interval between two consecutive probes |
| Time-out |Probe time-out in seconds. If a valid response isn't received within this time-out period, the probe is marked as failed |
| Unhealthy threshold |Probe retry count. The backend server is marked down after the consecutive probe failure count reaches the unhealthy threshold |
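
As an illustration of these properties, a custom probe might be created with the Azure CLI like this (a sketch; the gateway, resource group, and host values are placeholders):

```bash
# Create a custom HTTPS health probe on an existing v2 application gateway
az network application-gateway probe create \
  --resource-group MyRG --gateway-name MyAppGateway --name custom-probe \
  --protocol Https --host contoso.com --path /health \
  --interval 30 --timeout 30 --threshold 3
```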
Once the match criteria is specified, it can be attached to probe configuration
## NSG considerations
+Fine-grained control over the Application Gateway subnet via NSG rules is possible in public preview. For more information, see [Network security group control](application-gateway-private-deployment.md#network-security-group-control).
+
+With current functionality there are some restrictions:
+ You must allow incoming Internet traffic on TCP ports 65503-65534 for the Application Gateway v1 SKU, and TCP ports 65200-65535 for the v2 SKU with the destination subnet as **Any** and source as **GatewayManager** service tag. This port range is required for Azure infrastructure communication. Additionally, outbound Internet connectivity can't be blocked, and inbound traffic coming from the **AzureLoadBalancer** tag must be allowed.
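
For example, the required infrastructure rule for a v2 SKU could look like this with the Azure CLI (a sketch; the resource group and NSG names are placeholders):

```bash
# Allow Azure infrastructure communication to the v2 Application Gateway subnet
az network nsg rule create --resource-group MyRG --nsg-name appgw-nsg \
  --name AllowGatewayManager --priority 300 --direction Inbound --access Allow \
  --protocol Tcp --source-address-prefixes GatewayManager \
  --source-port-ranges '*' --destination-address-prefixes '*' \
  --destination-port-ranges 65200-65535
```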
application-gateway Configuration Frontend Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-frontend-ip.md
You can configure the application gateway to have a public IP address, a private
## Public and private IP address support
-Application Gateway V2 currently doesn't support only private IP mode. It supports the following combinations:
+Application Gateway V2 currently supports the following combinations:
* Private IP address and public IP address * Public IP address only
+* [Private IP address only (preview)](application-gateway-private-deployment.md)
For more information, see [Frequently asked questions about Application Gateway](application-gateway-faq.yml#how-do-i-use-application-gateway-v2-with-only-private-frontend-ip-address).
-A public IP address isn't required for an internal endpoint that's not exposed to the Internet. That's known as an *internal load-balancer* (ILB) endpoint or private frontend IP. An application gateway ILB is useful for internal line-of-business applications that aren't exposed to the Internet. It's also useful for services and tiers in a multi-tier application within a security boundary that aren't exposed to the Internet but that require round-robin load distribution, session stickiness, or TLS termination.
+A public IP address isn't required for an internal endpoint that's not exposed to the Internet. A private frontend configuration is useful for internal line-of-business applications that aren't exposed to the Internet. It's also useful for services and tiers in a multi-tier application within a security boundary that aren't exposed to the Internet but that require round-robin load distribution, session stickiness, or TLS termination.
Only one public IP address and one private IP address are supported. You choose the frontend IP when you create the application gateway.
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
By visiting Azure Advisor for your account, you can verify if your subscription
## Network security groups
-Network security groups (NSGs) are supported on Application Gateway. But there are some restrictions:
+Network security groups (NSGs) are supported on Application Gateway.
+
+Fine-grained control over the Application Gateway subnet via NSG rules is possible in public preview. For more information, see [Network security group control](application-gateway-private-deployment.md#network-security-group-control).
+
+With current functionality there are some restrictions:
- You must allow incoming Internet traffic on TCP ports 65503-65534 for the Application Gateway v1 SKU, and TCP ports 65200-65535 for the v2 SKU with the destination subnet as **Any** and source as **GatewayManager** service tag. This port range is required for Azure infrastructure communication. These ports are protected (locked down) by Azure certificates. External entities, including the customers of those gateways, can't communicate on these endpoints.
For this scenario, use NSGs on the Application Gateway subnet. Put the following
## Supported user-defined routes
+Fine-grained control over the Application Gateway subnet via Route Table rules is possible in public preview. For more information, see [Route table control](application-gateway-private-deployment.md#route-table-control).
+
+With current functionality there are some restrictions:
+ > [!IMPORTANT] > Using UDRs on the Application Gateway subnet might cause the health status in the [backend health view](./application-gateway-diagnostics.md#backend-health) to appear as **Unknown**. It also might cause generation of Application Gateway logs and metrics to fail. We recommend that you don't use UDRs on the Application Gateway subnet so that you can view the backend health, logs, and metrics.
For this scenario, use NSGs on the Application Gateway subnet. Put the following
## Next steps

- [Learn about frontend IP address configuration](configuration-frontend-ip.md).
+- [Learn about private Application Gateway deployment](application-gateway-private-deployment.md).
application-gateway Overview V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview-v2.md
The following table compares the features available with each SKU.
| Azure Kubernetes Service (AKS) Ingress controller | | &#x2713; |
| Azure Key Vault integration | | &#x2713; |
| Rewrite HTTP(S) headers | | &#x2713; |
+| Enhanced Network Control (NSG, Route Table, Private IP Frontend only) | | &#x2713; |
| URL-based routing | &#x2713; | &#x2713; |
| Multiple-site hosting | &#x2713; | &#x2713; |
| Mutual Authentication (mTLS) | | &#x2713; |
This section describes features and limitations of the v2 SKU that differ from t
|Difference|Details|
|--|--|
-|Authentication certificate|Not supported.<br>For more information, see [Overview of end to end TLS with Application Gateway](ssl-overview.md#end-to-end-tls-with-the-v2-sku).|
|Mixing Standard_v2 and Standard Application Gateway on the same subnet|Not supported|
-|User-Defined Route (UDR) on Application Gateway subnet|Supported (specific scenarios). In preview.<br> For more information about supported scenarios, see [Application Gateway configuration overview](configuration-infrastructure.md#supported-user-defined-routes).|
-|NSG for Inbound port range| - 65200 to 65535 for Standard_v2 SKU<br>- 65503 to 65534 for Standard SKU.<br>For more information, see the [FAQ](application-gateway-faq.yml#are-network-security-groups-supported-on-the-application-gateway-subnet).|
+|User-Defined Route (UDR) on Application Gateway subnet|For information about supported scenarios, see [Application Gateway configuration overview](configuration-infrastructure.md#supported-user-defined-routes).|
+|NSG for Inbound port range| - 65200 to 65535 for Standard_v2 SKU<br>- 65503 to 65534 for Standard SKU.<br>Not required for v2 SKUs in public preview. [Learn more](application-gateway-private-deployment.md).<br>For more information, see the [FAQ](application-gateway-faq.yml#are-network-security-groups-supported-on-the-application-gateway-subnet).|
|Performance logs in Azure diagnostics|Not supported.<br>Azure metrics should be used.|
-|Billing|Billing scheduled to start on July 1, 2019.|
-|FIPS mode|These are currently not supported.|
-|ILB only mode|This is currently not supported. Public and ILB mode together is supported.|
-|Net watcher integration|Not supported.|
+|FIPS mode|Currently not supported.|
+|Private frontend configuration only mode|Currently in public preview. [Learn more](application-gateway-private-deployment.md).|
+|Azure Network Watcher integration|Not supported.|
|Microsoft Defender for Cloud integration|Not yet available.|

## Migrate from v1 to v2
applied-ai-services Concept Add On Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-add-on-capabilities.md
recommendations: false
**This article applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**.
-> [!IMPORTANT]
->
-> * The Form Recognizer Studio add-on capabilities feature is currently in gated preview. Features, approaches and processes may change, prior to General Availability (GA), based on user feedback.
-> * Complete and submit the [**Form Recognizer private preview request form**](https://aka.ms/form-recognizer/preview/survey) to request access.
-
> [!NOTE]
>
> Add-on capabilities for Form Recognizer Studio are only available within the Read and Layout models for the `2023-02-28-preview` release.
applied-ai-services Build A Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-a-custom-classifier.md
Previously updated : 03/03/2023 Last updated : 03/30/2023 monikerRange: 'form-recog-3.0.0' recommendations: false
The Form Recognizer Studio provides and orchestrates all the API calls required
In your project, you only need to label each document with the appropriate class label. You see the files you uploaded to storage in the file list, ready to be labeled. You have a few options to label your dataset.
Congratulations you've trained a custom classification model in the Form Recogni
The [classification model](../concept-custom-classifier.md) requires results from the [layout model](../concept-layout.md) for each training document. If you haven't provided the layout results, the Studio attempts to run the layout model for each document prior to training the classifier. This process is throttled and can result in a 429 response.
-In the Studiio, prior to training with the classification model, run the [layout model](https://formrecognizer.appliedai.azure.com/studio/layout) on each document and upload it to the same location as the original document. Once the layout results are added, you can train the classifier model with your documents.
+In the Studio, prior to training with the classification model, run the [layout model](https://formrecognizer.appliedai.azure.com/studio/layout) on each document and upload it to the same location as the original document. Once the layout results are added, you can train the classifier model with your documents.
## Next steps
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md
Title: Deploy an agent-based Linux Hybrid Runbook Worker in Automation
description: This article tells how to install an agent-based Hybrid Runbook Worker to run runbooks on Linux-based machines in your local datacenter or cloud environment. Previously updated : 03/15/2023 Last updated : 03/30/2023
sudo python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/
## <a name="remove-linux-hybrid-runbook-worker"></a>Remove the Hybrid Runbook Worker
-You can use the command `ls /var/opt/microsoft/omsagent` on the Hybrid Runbook Worker to get the workspace ID. A folder is created that is named with the workspace ID.
+Run the following commands on the agent-based Linux Hybrid Worker:
-```bash
-sudo python onboarding.py --deregister --endpoint="<URL>" --key="<PrimaryAccessKey>" --groupname="Example" --workspaceid="<workspaceId>"
-```
+1. ```bash
+ sudo bash
+ ```
+
+1. ```bash
+ rm -r /home/nxautomation
+ ```
+1. In the Azure portal, go to your Automation account.
+1. Under **Process Automation**, select **Hybrid worker groups** and then your hybrid worker group to go to the **Hybrid Worker Group** page.
+1. Under **Hybrid worker group**, select **Hybrid Workers**.
+1. Select the checkbox next to the machine(s) you want to delete from the hybrid worker group.
+1. Select **Delete** to remove the agent-based Linux Hybrid Worker.
+
+ > [!NOTE]
+ > - These commands don't remove the Log Analytics agent for Linux from the machine. They only remove the functionality and configuration of the Hybrid Runbook Worker role.
+ > - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
+ > - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
-> [!NOTE]
-> - This script doesn't remove the Log Analytics agent for Linux from the machine. It only removes the functionality and configuration of the Hybrid Runbook Worker role. </br>
-> - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
-> - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
## Remove a Hybrid Worker group
automation Automation Windows Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-windows-hrw-install.md
Modules that are installed must be in a location referenced by the `PSModulePath
## <a name="remove-windows-hybrid-runbook-worker"></a>Remove the Hybrid Runbook Worker
-1. In the Azure portal, go to your Automation account.
+1. Open a PowerShell session in Administrator mode and run the following command:
-1. Under **Account Settings**, select **Keys** and note the values for **URL** and **Primary Access Key**.
+ ```powershell-interactive
+ Remove-Item -Path "HKLM:\SOFTWARE\Microsoft\HybridRunbookWorker\<AutomationAccountID>\<HybridWorkerGroupName>" -Force -Verbose
+ ```
+1. In the Azure portal, go to your Automation account.
+1. Under **Process Automation**, select **Hybrid worker groups** and then your hybrid worker group to go to the **Hybrid Worker Group** page.
+1. Under **Hybrid worker group**, select **Hybrid Workers**.
+1. Select the checkbox next to the machine(s) you want to delete from the hybrid worker group.
+1. Select **Delete** to remove the agent-based Windows Hybrid Worker.
-1. Open a PowerShell session in Administrator mode and run the following command with your URL and primary access key values. Use the `Verbose` parameter for a detailed log of the removal process. To remove stale machines from your Hybrid Worker group, use the optional `machineName` parameter.
+ > [!NOTE]
+ > - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
+ > - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
-```powershell-interactive
-Remove-HybridRunbookWorker -Url <URL> -Key <primaryAccessKey> -MachineName <computerName>
-```
-> [!NOTE]
-> - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
-> - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
## Remove a Hybrid Worker group
automation Migrate Existing Agent Based Hybrid Worker To Extension Based Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md
Title: Migrate an existing agent-based hybrid workers to extension-based-workers
description: This article provides information on how to migrate an existing agent-based hybrid worker to extension based workers. Previously updated : 03/15/2023 Last updated : 03/30/2023 #Customer intent: As a developer, I want to learn about extension so that I can efficiently migrate agent based hybrid workers to extension based workers.
New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Locati
#### [Windows Hybrid Worker](#tab/win-hrw)
-1. In the Azure portal, go to your Automation account.
+1. Open a PowerShell session in Administrator mode and run the following command:
-1. Under **Account Settings**, select **Keys** and note the values for **URL** and **Primary Access Key**.
+ ```powershell-interactive
+ Remove-Item -Path "HKLM:\SOFTWARE\Microsoft\HybridRunbookWorker\<AutomationAccountID>\<HybridWorkerGroupName>" -Force -Verbose
+ ```
+1. In the Azure portal, go to your Automation account.
+1. Under **Process Automation**, select **Hybrid worker groups** and then your hybrid worker group to go to the **Hybrid Worker Group** page.
+1. Under **Hybrid worker group**, select **Hybrid Workers**.
+1. Select the checkbox next to the machine(s) you want to delete from the hybrid worker group.
+1. Select **Delete** to remove the agent-based Windows Hybrid Worker.
-1. Open a PowerShell session in Administrator mode and run the following command with your URL and primary access key values. Use the `Verbose` parameter for a detailed log of the removal process. To remove stale machines from your Hybrid Worker group, use the optional `machineName` parameter.
-
-```powershell-interactive
-Remove-HybridRunbookWorker -Url <URL> -Key <primaryAccessKey> -MachineName <computerName>
-```
-> [!NOTE]
-> - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
-> - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
+ > [!NOTE]
+ > - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
+ > - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
#### [Linux Hybrid Worker](#tab/lin-hrw)
-You can use the command `ls /var/opt/microsoft/omsagent` on the Hybrid Runbook Worker to get the workspace ID. A folder is created that is named with the workspace ID.
+Run the following commands on the agent-based Linux Hybrid Worker:
-```bash
-sudo python onboarding.py --deregister --endpoint="<URL>" --key="<PrimaryAccessKey>" --groupname="Example" --workspaceid="<workspaceId>"
-```
+1. ```bash
+ sudo bash
+ ```
-> [!NOTE]
-> - This script doesn't remove the Log Analytics agent for Linux from the machine. It only removes the functionality and configuration of the Hybrid Runbook Worker role. </br>
-> - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
-> - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
+1. ```bash
+ rm -r /home/nxautomation
+ ```
+1. In the Azure portal, go to your Automation account.
+1. Under **Process Automation**, select **Hybrid worker groups** and then your hybrid worker group to go to the **Hybrid Worker Group** page.
+1. Under **Hybrid worker group**, select **Hybrid Workers**.
+1. Select the checkbox next to the machine(s) you want to delete from the hybrid worker group.
+1. Select **Delete** to remove the agent-based Linux Hybrid Worker.
+
+ > [!NOTE]
+ > - These commands don't remove the Log Analytics agent for Linux from the machine. They only remove the functionality and configuration of the Hybrid Runbook Worker role.
+ > - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker.
+ > - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
azure-arc Plan Azure Arc Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/plan-azure-arc-data-services.md
As outlined in [Connectivity modes and requirements](./connectivity.md), you can
After you've installed the Azure Arc data controller, you can create and access data services such as Azure Arc-enabled SQL Managed Instance or Azure Arc-enabled PostgreSQL server.
+## Known limitations
+Currently, only one Azure Arc data controller per Kubernetes cluster is supported. However, you can create multiple Arc data services, such as Arc-enabled SQL managed instances and Arc-enabled PostgreSQL servers, that are managed by the same Azure Arc data controller.
+
## Next steps

You have several additional options for creating the Azure Arc data controller:
azure-arc Conceptual Workload Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-workload-management.md
Title: "Workload management in a multi-cluster environment with GitOps" description: "This article provides a conceptual overview of the workload management in a multi-cluster environment with GitOps." Previously updated : 03/13/2023 Last updated : 03/29/2023
# Workload management in a multi-cluster environment with GitOps
-Developing modern cloud-native applications often includes building, deploying, configuring, and promoting workloads across a fleet of Kubernetes clusters. With the increasing diversity of Kubernetes clusters in the fleet, and the variety of applications and services, the process can become complex and unscalable. Enterprise organizations can be more successful in these efforts by having a well defined structure that organizes people and their activities, and by using automated tools.
+Developing modern cloud-native applications often includes building, deploying, configuring, and promoting workloads across a group of Kubernetes clusters. With the increasing diversity of Kubernetes cluster types, and the variety of applications and services, the process can become complex and unscalable. Enterprise organizations can be more successful in these efforts by having a well defined structure that organizes people and their activities, and by using automated tools.
This article walks you through a typical business scenario, outlining the involved personas and major challenges that organizations often face while managing cloud-native workloads in a multi-cluster environment. It also suggests an architectural pattern that can make this complex process simpler, observable, and more scalable. ## Scenario overview
-This article describes an organization that develops cloud-native applications. Any application needs a compute resource to work on. In the cloud-native world, this compute resource is a Kubernetes cluster. An organization may have a single cluster or, more commonly, multiple clusters. So the organization must decide which applications should work on which clusters. In other words, they must schedule the applications across clusters. The result of this decision, or scheduling, is a model of the desired state of their cluster fleet. Having that in place, they need somehow to deliver applications to the assigned clusters so that they can turn the desired state into the reality, or, in other words, reconcile it.
+This article describes an organization that develops cloud-native applications. Any application needs a compute resource to work on. In the cloud-native world, this compute resource is a Kubernetes cluster. An organization may have a single cluster or, more commonly, multiple clusters. So the organization must decide which applications should work on which clusters. In other words, they must schedule the applications across clusters. The result of this decision, or scheduling, is a model of the desired state of the clusters in their environment. Having that in place, they need somehow to deliver applications to the assigned clusters so that they can turn the desired state into the reality, or, in other words, reconcile it.
Every application goes through a software development lifecycle that promotes it to the production environment. For example, an application is built, deployed to Dev environment, tested and promoted to Stage environment, tested, and finally delivered to production. For a cloud-native application, the application requires and targets different Kubernetes cluster resources throughout its lifecycle. In addition, applications normally require clusters to provide some platform services, such as Prometheus and Fluentbit, and infrastructure configurations, such as networking policy.
-Depending on the application, there may be a great diversity of cluster types to which the application is deployed. The same application with different configurations could be hosted on a managed cluster in the cloud, on a connected cluster in an on-premises environment, on a fleet of clusters on semi-connected edge devices on factory lines or military drones, and on an air-gapped cluster on a starship. Another complexity is that clusters in early lifecycle stages such as Dev and QA are normally managed by the developer, while reconciliation to actual production clusters may be managed by the organization's customers. In the latter case, the developer may be responsible only for promoting and scheduling the application across different rings.
+Depending on the application, there may be a great diversity of cluster types to which the application is deployed. The same application with different configurations could be hosted on a managed cluster in the cloud, on a connected cluster in an on-premises environment, on a group of clusters on semi-connected edge devices on factory lines or military drones, and on an air-gapped cluster on a starship. Another complexity is that clusters in early lifecycle stages such as Dev and QA are normally managed by the developer, while reconciliation to actual production clusters may be managed by the organization's customers. In the latter case, the developer may be responsible only for promoting and scheduling the application across different rings.
## Challenges at scale
In a small organization with a single application and only a few operations, mos
The following capabilities are required to perform this type of workload management at scale in a multi-cluster environment:

- Separation of concerns on scheduling and reconciling
-- Promotion of the fleet state through a chain of environments
+- Promotion of the multi-cluster state through a chain of environments
- Sophisticated, extensible and replaceable scheduler
- Flexibility to use different reconcilers for different cluster types depending on their nature and connectivity
Before we describe the scenario, let's clarify which personas are involved, what
### Platform team
-The platform team is responsible for managing the fleet of clusters that hosts applications produced by application teams.
+The platform team is responsible for managing the clusters that host applications produced by application teams.
Key responsibilities of the platform team are: * Define staging environments (Dev, QA, UAT, Prod).
-* Define cluster types in the fleet and their distribution across environments.
+* Define cluster types and their distribution across environments.
* Provision new clusters.
-* Manage infrastructure configurations across the fleet.
+* Manage infrastructure configurations across the clusters.
* Maintain platform services used by applications. * Schedule applications and platform services on the clusters.
Key responsibilities of the platform team are:
The application team is responsible for the software development lifecycle (SDLC) of their applications. They provide Kubernetes manifests that describe how to deploy the application to different targets. They're responsible for owning CI/CD pipelines that create container images and Kubernetes manifests and promote deployment artifacts across environment stages.
-Typically, the application team has no knowledge of the clusters that they are deploying to. They aren't aware of the structure of the fleet, global configurations, or tasks performed by other teams. The application team primarily understands the success of their application rollout as defined by the success of the pipeline stages.
+Typically, the application team has no knowledge of the clusters that they are deploying to. They aren't aware of the structure of the multi-cluster environment, global configurations, or tasks performed by other teams. The application team primarily understands the success of their application rollout as defined by the success of the pipeline stages.
Key responsibilities of the application team are:
Let's have a look at the high level solution architecture and understand its pri
### Control plane
-The platform team models the fleet in the control plane. It's designed to be human-oriented and easy to understand, update, and review. The control plane operates with abstractions such as Cluster Types, Environments, Workloads, Scheduling Policies, Configs and Templates. These abstractions are handled by an automated process that assigns deployment targets and configuration values to the cluster types, then saves the result to the platform GitOps repository. Although the entire fleet may consist of thousands of physical clusters, the platform repository operates at a higher level, grouping the clusters into cluster types.
+The platform team models the multi-cluster environment in the control plane. It's designed to be human-oriented and easy to understand, update, and review. The control plane operates with abstractions such as Cluster Types, Environments, Workloads, Scheduling Policies, Configs and Templates. These abstractions are handled by an automated process that assigns deployment targets and configuration values to the cluster types, then saves the result to the platform GitOps repository. Although there may be thousands of physical clusters, the platform repository operates at a higher level, grouping the clusters into cluster types.
The main requirement for the control plane storage is to provide a reliable and secure transaction processing functionality, rather than being hit with complex queries against a large amount of data. Various technologies may be used to store the control plane data.
Every cluster type can use a different reconciler (such as Flux, ArgoCD, Zarf, R
### Platform services
-Platform services are workloads (such as Prometheus, NGINX, Fluentbit, and so on) maintained by the platform team. Just like any workloads, they have their source repositories and manifests storage. The source repositories may contain pointers to external Helm charts. CI/CD pipelines pull the charts with containers and perform necessary security scans before submitting them to the manifests storage, from where they're reconciled to the clusters in the fleet.
+Platform services are workloads (such as Prometheus, NGINX, Fluentbit, and so on) maintained by the platform team. Just like any workloads, they have their source repositories and manifests storage. The source repositories may contain pointers to external Helm charts. CI/CD pipelines pull the charts with containers and perform necessary security scans before submitting them to the manifests storage, from where they're reconciled to the clusters.
### Deployment Observability Hub
-Deployment Observability Hub is a central storage that is easy to query with complex queries against a large amount of data. It contains deployment data with historical information on workload versions and their deployment state across clusters in the fleet. Clusters register themselves in the storage and update their compliance status with the GitOps repositories. Clusters operate at the level of Git commits only. High-level information, such as application versions, environments, and cluster type data, is transferred to the central storage from the GitOps repositories. This high-level information gets correlated in the central storage with the commit compliance data sent from the clusters.
+Deployment Observability Hub is a central storage that is easy to query with complex queries against a large amount of data. It contains deployment data with historical information on workload versions and their deployment state across clusters. Clusters register themselves in the storage and update their compliance status with the GitOps repositories. Clusters operate at the level of Git commits only. High-level information, such as application versions, environments, and cluster type data, is transferred to the central storage from the GitOps repositories. This high-level information gets correlated in the central storage with the commit compliance data sent from the clusters.
## Next steps
-* Explore a [sample implementation of workload management in a multi-cluster environment with GitOps](https://github.com/microsoft/kalypso).
-* Try our [Tutorial: Workload Management in Multi-cluster environment with GitOps](tutorial-workload-management.md) to walk through the implementation.
+* Walk through a sample implementation to explore [workload management in a multi-cluster environment with GitOps](workload-management.md).
+* Explore a [multi-cluster workload management sample repository](https://github.com/microsoft/kalypso).
azure-arc Workload Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/workload-management.md
+
+ Title: 'Explore workload management in a multi-cluster environment with GitOps'
+description: Explore typical use-cases that Platform and Application teams face on a daily basis working with Kubernetes workloads in a multi-cluster environment.
+keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, ci/cd, devops"
+Last updated : 03/29/2023
+# Explore workload management in a multi-cluster environment with GitOps
+
+Enterprise organizations, developing cloud native applications, face challenges when deploying, configuring, and promoting a great variety of applications and services across multiple Kubernetes clusters at scale. This environment may include Azure Kubernetes Service (AKS) clusters, clusters running on other public cloud providers, or clusters in on-premises data centers that are connected to Azure through Azure Arc. Refer to the [conceptual article](conceptual-workload-management.md) exploring the business process, challenges and solution architecture.
+
+This article walks you through an example scenario of the workload deployment and configuration in a multi-cluster Kubernetes environment. First, you deploy a sample infrastructure with a few GitHub repositories and AKS clusters. Next, you work through a set of use cases where you act as different personas working in the same environment: the Platform Team and the Application Team.
+
+## Prerequisites
+
+In order to successfully deploy the sample, you need:
+
+- An Azure subscription. If you don't already have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- [Azure CLI](/cli/azure/install-azure-cli)
+- [GitHub CLI](https://cli.github.com)
+- [Helm](https://helm.sh/docs/helm/helm_install/)
+- [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl)
+- [jq](https://stedolan.github.io/jq/download/)
+- GitHub token with the following scopes: `repo`, `workflow`, `write:packages`, `delete:packages`, `read:org`, `delete_repo`.
+
+## 1 - Deploy the sample
+
+To deploy the sample, run the following script:
+
+```bash
+mkdir kalypso && cd kalypso
+curl -fsSL -o deploy.sh https://raw.githubusercontent.com/microsoft/kalypso/main/deploy/deploy.sh
+chmod 700 deploy.sh
+./deploy.sh -c -p <prefix. e.g. kalypso> -o <github org. e.g. eedorenko> -t <github token> -l <azure-location. e.g. westus2>
+```
+
+This script may take 10-15 minutes to complete. After it's done, it reports the execution result in the output like this:
+
+```output
+Deployment is complete!
+
+Created repositories:
+ - https://github.com/eedorenko/kalypso-control-plane
+ - https://github.com/eedorenko/kalypso-gitops
+ - https://github.com/eedorenko/kalypso-app-src
+ - https://github.com/eedorenko/kalypso-app-gitops
+
+Created AKS clusters in kalypso-rg resource group:
+ - control-plane
+ - drone (Flux based workload cluster)
+ - large (ArgoCD based workload cluster)
+
+```
+
+> [!NOTE]
+> If something goes wrong with the deployment, you can delete the created resources with the following command:
+>
+> ```bash
> ./deploy.sh -d -p <prefix. e.g. kalypso> -o <github org. e.g. eedorenko> -t <github token> -l <azure-location. e.g. westus2>
+> ```
+
+### Sample overview
+
+This deployment script created an infrastructure, shown in the following diagram:
+
+There are a few Platform Team repositories:
+
+- [Control Plane](https://github.com/microsoft/kalypso-control-plane): Contains a platform model defined with high level abstractions such as environments, cluster types, applications and services, mapping rules and configurations, and promotion workflows.
+- [Platform GitOps](https://github.com/microsoft/kalypso-gitops): Contains final manifests that represent the topology of the multi-cluster environment, such as which cluster types are available in each environment, what workloads are scheduled on them, and what platform configuration values are set.
+- [Services Source](https://github.com/microsoft/kalypso-svc-src): Contains high-level manifest templates of sample dial-tone platform services.
+- [Services GitOps](https://github.com/microsoft/kalypso-svc-gitops): Contains final manifests of sample dial-tone platform services to be deployed across the clusters.
+
+The infrastructure also includes a couple of the Application Team repositories:
+
+- [Application Source](https://github.com/microsoft/kalypso-app-src): Contains a sample application source code, including Docker files, manifest templates and CI/CD workflows.
+- [Application GitOps](https://github.com/microsoft/kalypso-app-gitops): Contains final sample application manifests to be deployed to the deployment targets.
+
+The script created the following Azure Kubernetes Service (AKS) clusters:
+
+- `control-plane` - This cluster is a management cluster that doesn't run any workloads. The `control-plane` cluster hosts [Kalypso Scheduler](https://github.com/microsoft/kalypso-scheduler) operator that transforms high level abstractions from the [Control Plane](https://github.com/microsoft/kalypso-control-plane) repository to the raw Kubernetes manifests in the [Platform GitOps](https://github.com/microsoft/kalypso-gitops) repository.
+- `drone` - A sample workload cluster. This cluster has the [GitOps extension](conceptual-gitops-flux2.md) installed and it uses `Flux` to reconcile manifests from the [Platform GitOps](https://github.com/microsoft/kalypso-gitops) repository. For this sample, the `drone` cluster can represent an Azure Arc-enabled cluster or an AKS cluster with the Flux/GitOps extension.
+- `large` - A sample workload cluster. This cluster has `ArgoCD` installed on it to reconcile manifests from the [Platform GitOps](https://github.com/microsoft/kalypso-gitops) repository.
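+
+To verify that the clusters were created, you can list them with the Azure CLI (a sketch; the resource group name depends on the prefix you chose, `kalypso-rg` in this example):
+
+```bash
+# List the AKS clusters created by the deployment script
+az aks list --resource-group kalypso-rg --output table
+```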
+
+### Explore Control Plane
+
+The `control plane` repository contains three branches: `main`, `dev` and `stage`. The `dev` and `stage` branches contain configurations that are specific to the `Dev` and `Stage` environments. On the other hand, the `main` branch doesn't represent any specific environment. The content of the `main` branch is common and used by all environments. Any change to the `main` branch is subject to promotion across environments. For example, a new application or a new template can be promoted to the `Stage` environment only after successful testing on the `Dev` environment.
+
+The `main` branch:
+
+|Folder|Description|
+||--|
+|.github/workflows| Contains GitHub workflows that implement the promotional flow.|
+|.environments| Contains a list of environments with pointers to the branches with the environment configurations.|
+|templates| Contains manifest templates for various reconcilers (for example, Flux and ArgoCD) and a template for the workload namespace.|
+|workloads| Contains a list of onboarded applications and services with pointers to the corresponding GitOps repositories.|
+
+The `dev` and `stage` branches:
+
+|Item|Description|
+|-|--|
+|cluster-types| Contains a list of available cluster types in the environment. The cluster types are grouped in custom subfolders. Each cluster type is marked with a set of labels. It specifies a reconciler type that it uses to fetch the manifests from GitOps repositories. The subfolders also contain a number of config maps with the platform configuration values available on the cluster types.|
+|configs/dev-config.yaml| Contains config maps with the platform configuration values applicable for all cluster types in the environment.|
+|scheduling| Contains scheduling policies that map workload deployment targets to the cluster types in the environment.|
+|base-repo.yaml| A pointer to the place in the `Control Plane` repository (`main`) from where the scheduler should take templates and workload registrations.|
+|gitops-repo.yaml| A pointer to the place in the `Platform GitOps` repository where the scheduler should submit pull requests with the generated manifests.|
+
+> [!TIP]
+> The folder structure in the `Control Plane` repository doesn't really matter. This example provides one way of organizing files in the repository, but feel free to do it in your own preferred way. The scheduler is interested in the content of the files, rather than where the files are located.
+
+## 2 - Platform Team: Onboard a new application
+
+The Application Team runs their software development lifecycle. They build their application and promote it across environments. They're not aware of what cluster types are available and where their application will be deployed. But they do know that they want to deploy their application in the `Dev` environment for functional and performance testing and in the `Stage` environment for UAT testing.
+
+The Application Team describes this intention in the [workload](https://github.com/microsoft/kalypso-app-src/blob/main/workload/workload.yaml) file in the [Application Source](https://github.com/microsoft/kalypso-app-src) repository:
+
+```yaml
+apiVersion: scheduler.kalypso.io/v1alpha1
+kind: Workload
+metadata:
+ name: hello-world-app
+ labels:
+ type: application
+ family: force
+spec:
+ deploymentTargets:
+ - name: functional-test
+ labels:
+ purpose: functional-test
+ edge: "true"
+ environment: dev
+ manifests:
+ repo: https://github.com/microsoft/kalypso-app-gitops
+ branch: dev
+ path: ./functional-test
+ - name: performance-test
+ labels:
+ purpose: performance-test
+ edge: "false"
+ environment: dev
+ manifests:
+ repo: https://github.com/microsoft/kalypso-app-gitops
+ branch: dev
+ path: ./performance-test
+ - name: uat-test
+ labels:
+ purpose: uat-test
+ environment: stage
+ manifests:
+ repo: https://github.com/microsoft/kalypso-app-gitops
+ branch: stage
+ path: ./uat-test
+```
+
+This file contains a list of three deployment targets. These targets are marked with custom labels and point to the folders in the [Application GitOps](https://github.com/microsoft/kalypso-app-gitops) repository where the Application Team generates application manifests for each deployment target.
+
+With this file, the Application Team requests Kubernetes compute resources from the Platform Team. In response, the Platform Team must register the application in the [Control Plane](https://github.com/microsoft/kalypso-control-plane) repo.
+
+To register the application, open a terminal and use the following script:
+
+```bash
+export org=<github org>
+export prefix=<prefix>
+
+# clone the control-plane repo
+git clone https://github.com/$org/$prefix-control-plane control-plane
+cd control-plane
+
+# create workload registration file
+
+cat <<EOF >workloads/hello-world-app.yaml
+apiVersion: scheduler.kalypso.io/v1alpha1
+kind: WorkloadRegistration
+metadata:
+ name: hello-world-app
+ labels:
+ type: application
+spec:
+ workload:
+ repo: https://github.com/$org/$prefix-app-src
+ branch: main
+ path: workload/
+ workspace: kaizen-app-team
+EOF
+
+git add .
+git commit -m 'workload registration'
+git push
+```
+
+> [!NOTE]
+> For simplicity, this example pushes changes directly to `main`. In practice, you'd create a pull request to submit the changes.
+
+With that in place, the application is onboarded in the control plane. But the control plane still doesn't know how to map the application deployment targets to all of the cluster types.
+
+### Define application scheduling policy on Dev
+
+The Platform Team must define how the application deployment targets will be scheduled on cluster types in the `Dev` environment. To do this, submit scheduling policies for the `functional-test` and `performance-test` deployment targets with the following script:
+
+```bash
+# Switch to the dev branch (representing the Dev environment) in the control-plane folder
+git checkout dev
+mkdir -p scheduling/kaizen
+
+# Create a scheduling policy for the functional-test deployment target
+cat <<EOF >scheduling/kaizen/functional-test-policy.yaml
+apiVersion: scheduler.kalypso.io/v1alpha1
+kind: SchedulingPolicy
+metadata:
+ name: functional-test-policy
+spec:
+ deploymentTargetSelector:
+ workspace: kaizen-app-team
+ labelSelector:
+ matchLabels:
+ purpose: functional-test
+ edge: "true"
+ clusterTypeSelector:
+ labelSelector:
+ matchLabels:
+ restricted: "true"
+ edge: "true"
+EOF
+
+# Create a scheduling policy for the performance-test deployment target
+cat <<EOF >scheduling/kaizen/performance-test-policy.yaml
+apiVersion: scheduler.kalypso.io/v1alpha1
+kind: SchedulingPolicy
+metadata:
+ name: performance-test-policy
+spec:
+ deploymentTargetSelector:
+ workspace: kaizen-app-team
+ labelSelector:
+ matchLabels:
+ purpose: performance-test
+ edge: "false"
+ clusterTypeSelector:
+ labelSelector:
+ matchLabels:
+ size: large
+EOF
+
+git add .
+git commit -m 'application scheduling policies'
+git config pull.rebase false
+git pull --no-edit
+git push
+```
+
+The first policy states that all deployment targets from the `kaizen-app-team` workspace, marked with the labels `purpose: functional-test` and `edge: "true"`, should be scheduled on all environment cluster types that are marked with the label `restricted: "true"`. You can treat a workspace as a group of applications produced by an application team.
+
+The second policy states that all deployment targets from the `kaizen-app-team` workspace, marked with the labels `purpose: performance-test` and `edge: "false"`, should be scheduled on all environment cluster types that are marked with the label `size: large`.
+
+This push to the `dev` branch triggers the scheduling process and creates a PR to the `dev` branch in the `Platform GitOps` repository.
+Besides `Promoted_Commit_id`, which is just tracking information for the promotion CD flow, the PR contains assignment manifests. The `functional-test` deployment target is assigned to the `drone` cluster type, and the `performance-test` deployment target is assigned to the `large` cluster type. Those manifests land in the `drone` and `large` folders, which contain all assignments to these cluster types in the `Dev` environment.
+
+The `Dev` environment also includes `command-center` and `small` cluster types:
+
+ :::image type="content" source="media/workload-management/dev-cluster-types.png" alt-text="Screenshot showing cluster types in the Dev environment.":::
+
+However, only the `drone` and `large` cluster types were selected by the scheduling policies that you defined.
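+
+For reference, each cluster type is itself described by a manifest in the `cluster-types` folder of the environment branch. The following sketch shows how the `drone` cluster type might be labeled so that the `functional-test` policy selects it. The label set and the `reconciler` value are illustrative assumptions, not the exact repository content:
+
+```yaml
+apiVersion: scheduler.kalypso.io/v1alpha1
+kind: ClusterType
+metadata:
+  name: drone
+  labels:
+    restricted: "true" # matched by the functional-test policy (assumed)
+    edge: "true" # matched by the functional-test policy (assumed)
+spec:
+  reconciler: arc-flux # illustrative name; the drone cluster type uses Flux
+  namespaceService: default
+```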
+
+### Understand deployment target assignment manifests
+
+Before you continue, take a closer look at the generated assignment manifests for the `functional-test` deployment target. There are `namespace.yaml`, `config.yaml`, and `reconciler.yaml` manifest files.
+
+`namespace.yaml` defines a namespace that will be created on any `drone` cluster where the `hello-world` application runs.
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+ labels:
+ deploymentTarget: hello-world-app-functional-test
+ environment: dev
+ someLabel: some-value
+ workload: hello-world-app
+ workspace: kaizen-app-team
+ name: dev-kaizen-app-team-hello-world-app-functional-test
+```
+
+`config.yaml` contains all platform configuration values available on any `drone` cluster that the application can use in the `Dev` environment.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: platform-config
+ namespace: dev-kaizen-app-team-hello-world-app-functional-test
+data:
+ CLUSTER_NAME: Drone
+ DATABASE_URL: mysql://restricted-host:3306/mysqlrty123
+ ENVIRONMENT: Dev
+ REGION: East US
+ SOME_COMMON_ENVIRONMENT_VARIABLE: "false"
+```
+
+`reconciler.yaml` contains Flux resources that a `drone` cluster uses to fetch application manifests, prepared by the Application Team for the `functional-test` deployment target.
+
+```yaml
+apiVersion: source.toolkit.fluxcd.io/v1beta2
+kind: GitRepository
+metadata:
+ name: hello-world-app-functional-test
+ namespace: flux-system
+spec:
+ interval: 30s
+ ref:
+ branch: dev
+ secretRef:
+ name: repo-secret
+ url: https://github.com/<github org>/<prefix>-app-gitops
+---
+apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
+kind: Kustomization
+metadata:
+ name: hello-world-app-functional-test
+ namespace: flux-system
+spec:
+ interval: 30s
+ path: ./functional-test
+ prune: true
+ sourceRef:
+ kind: GitRepository
+ name: hello-world-app-functional-test
+ targetNamespace: dev-kaizen-app-team-hello-world-app-functional-test
+```
+
+> [!NOTE]
+> The `control plane` defines that the `drone` cluster type uses `Flux` to reconcile manifests from the application GitOps repositories. The `large` cluster type, on the other hand, reconciles manifests with `ArgoCD`. Therefore, `reconciler.yaml` for the `performance-test` deployment target looks different and contains `ArgoCD` resources.
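+
+For comparison, an `ArgoCD`-based `reconciler.yaml` would contain an `Application` resource along the lines of the following sketch. This is an illustration only; the exact fields that the scheduler generates may differ:
+
+```yaml
+apiVersion: argoproj.io/v1alpha1
+kind: Application
+metadata:
+  name: hello-world-app-performance-test
+  namespace: argocd
+spec:
+  project: default
+  source:
+    repoURL: https://github.com/<github org>/<prefix>-app-gitops
+    targetRevision: dev
+    path: ./performance-test
+  destination:
+    server: https://kubernetes.default.svc
+    namespace: dev-kaizen-app-team-hello-world-app-performance-test
+  syncPolicy:
+    automated:
+      prune: true
+```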
+
+### Promote application to Stage
+
+Once you approve and merge the PR to the `Platform GitOps` repository, the `drone` and `large` AKS clusters that represent the corresponding cluster types start fetching the assignment manifests. The `drone` cluster has the [GitOps extension](conceptual-gitops-flux2.md) installed, pointing to the `Platform GitOps` repository, and it reports its compliance status to Azure Resource Graph.
+The PR merging event starts a GitHub workflow `checkpromote` in the `control plane` repository. This workflow waits until all clusters with the [GitOps extension](conceptual-gitops-flux2.md) installed that are looking at the `dev` branch in the `Platform GitOps` repository are compliant with the PR commit. In this example, the only such cluster is `drone`.
+Once the `checkpromote` workflow succeeds, it starts the `cd` workflow, which promotes the change (the application registration) to the `Stage` environment. For better visibility, it also updates the git commit status in the `control plane` repository.
+> [!NOTE]
+> If the `drone` cluster fails to reconcile the assignment manifests for any reason, the promotion flow will fail. The commit status will be marked as failed, and the application registration will not be promoted to the `Stage` environment.
+
+Next, configure a scheduling policy for the `uat-test` deployment target in the stage environment:
+
+```bash
+# Switch to the stage branch (representing the Stage environment) in the control-plane folder
+git checkout stage
+mkdir -p scheduling/kaizen
+
+# Create a scheduling policy for the uat-test deployment target
+cat <<EOF >scheduling/kaizen/uat-test-policy.yaml
+apiVersion: scheduler.kalypso.io/v1alpha1
+kind: SchedulingPolicy
+metadata:
+ name: uat-test-policy
+spec:
+ deploymentTargetSelector:
+ workspace: kaizen-app-team
+ labelSelector:
+ matchLabels:
+ purpose: uat-test
+ clusterTypeSelector:
+ labelSelector: {}
+EOF
+
+git add .
+git commit -m 'application scheduling policies'
+git config pull.rebase false
+git pull --no-edit
+git push
+```
+
+The policy states that all deployment targets from the `kaizen-app-team` workspace marked with labels `purpose: uat-test` should be scheduled on all cluster types defined in the environment.
+
+Pushing this policy to the `stage` branch triggers the scheduling process, which creates a PR with the assignment manifests to the `Platform GitOps` repository, similar to those for the `Dev` environment.
+
+As in the case with the `Dev` environment, after reviewing and merging the PR to the `Platform GitOps` repository, the `checkpromote` workflow in the `control plane` repository waits until clusters with the [GitOps extension](conceptual-gitops-flux2.md) (`drone`) reconcile the assignment manifests.
+
+ :::image type="content" source="media/workload-management/check-promote-to-stage.png" alt-text="Screenshot showing promotion to stage.":::
+
+On successful execution, the commit status is updated.
+## 3 - Application Dev Team: Build and deploy application
+
+The Application Team regularly submits pull requests to the `main` branch in the `Application Source` repository. Once a PR is merged to `main`, it starts a CI/CD workflow. Here, the workflow will be started manually.
+
+ Go to the `Application Source` repository in GitHub. On the `Actions` tab, select `Run workflow`.
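+
+If you prefer the command line, you can start the same workflow with the GitHub CLI. The workflow file name below is an assumption; check the `Actions` tab for the actual name:
+
+```bash
+# Trigger the CI/CD workflow manually (workflow file name is an assumption)
+gh workflow run ci.yml --repo $org/$prefix-app-src
+```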
+The workflow performs the following actions:
+
+- Builds the application Docker image and pushes it to the GitHub repository package.
+- Generates manifests for the `functional-test` and `performance-test` deployment targets. It uses configuration values from the `dev-configs` branch. The generated manifests are added to a pull request and auto-merged in the `dev` branch.
+- Generates manifests for the `uat-test` deployment target. It uses configuration values from the `stage-configs` branch.
+The generated manifests are added to a pull request to the `stage` branch, where they wait for approval.
+To test the application manually on the `Dev` environment before approving the PR to the `Stage` environment, first verify how the `functional-test` application instance works on the `drone` cluster:
+
+```bash
+kubectl port-forward svc/hello-world-service -n dev-kaizen-app-team-hello-world-app-functional-test 9090:9090 --context=drone
+
+# output:
+# Forwarding from 127.0.0.1:9090 -> 9090
+# Forwarding from [::1]:9090 -> 9090
+
+```
+
+While this command is running, open `localhost:9090` in your browser to see the application's greeting page.
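+
+You can also check the endpoint from the terminal while the port-forward is running:
+
+```bash
+# Query the functional-test instance through the forwarded port
+curl http://localhost:9090
+```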
+The next step is to check how the `performance-test` instance works on the `large` cluster:
+
+```bash
+kubectl port-forward svc/hello-world-service -n dev-kaizen-app-team-hello-world-app-performance-test 8080:8080 --context=large
+
+# output:
+# Forwarding from 127.0.0.1:8080 -> 8080
+# Forwarding from [::1]:8080 -> 8080
+
+```
+
+This time, use port `8080` and open `localhost:8080` in your browser.
+
+Once you're satisfied with the `Dev` environment, approve and merge the PR to the `Stage` environment. After that, test the `uat-test` application instance in the `Stage` environment on both clusters.
+
+Run the following command for the `drone` cluster and open `localhost:8001` in your browser:
+
+```bash
+kubectl port-forward svc/hello-world-service -n stage-kaizen-app-team-hello-world-app-uat-test 8001:8000 --context=drone
+```
+
+Run the following command for the `large` cluster and open `localhost:8002` in your browser:
+
+ ```bash
+kubectl port-forward svc/hello-world-service -n stage-kaizen-app-team-hello-world-app-uat-test 8002:8000 --context=large
+```
+
+> [!NOTE]
+> It may take up to three minutes to reconcile the changes from the application GitOps repository on the `large` cluster.
+
+The application instance on the `large` cluster shows the following greeting page:
+
+ :::image type="content" source="media/workload-management/stage-greeting-page.png" alt-text="Screenshot showing the greeting page on stage.":::
+
+## 4 - Platform Team: Provide platform configurations
+
+Applications in the clusters currently read data from the same database in both the `Dev` and `Stage` environments. Let's change that and configure the `west-us` clusters to provide a different database URL for the applications working in the `Stage` environment:
+
+```bash
+# Switch to the stage branch (representing the Stage environment) in the control-plane folder
+git checkout stage
+
+# Update a config map with the configurations for west-us clusters
+cat <<EOF >cluster-types/west-us/west-us-config.yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: west-us-config
+ labels:
+ platform-config: "true"
+ region: west-us
+data:
+ REGION: West US
+ DATABASE_URL: mysql://west-stage:8806/mysql2
+EOF
+
+git add .
+git commit -m 'database url configuration'
+git config pull.rebase false
+git pull --no-edit
+git push
+```
+
+The scheduler scans all config maps in the environment and collects values for each cluster type based on label matching. Then, it puts a `platform-config` config map in every deployment target folder in the `Platform GitOps` repository. The `platform-config` config map contains all of the platform configuration values that the workload can use on this cluster type in this environment.
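+
+For example, after this change, the `platform-config` config map generated for the `uat-test` deployment target on a `west-us` cluster type in the `Stage` environment might look like the following sketch. The merge result is an assumption for illustration, combining the environment-wide values with the region-specific overrides:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: platform-config
+  namespace: stage-kaizen-app-team-hello-world-app-uat-test
+data:
+  ENVIRONMENT: Stage
+  REGION: West US
+  DATABASE_URL: mysql://west-stage:8806/mysql2 # overridden by west-us-config
+```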
+
+In a few seconds, a new PR to the `stage` branch in the `Platform GitOps` repository appears.
+Approve the PR and merge it.
+
+The `large` cluster is handled by ArgoCD, which, by default, is configured to reconcile every three minutes. Unlike clusters such as `drone` that have the [GitOps extension](conceptual-gitops-flux2.md) installed, this cluster doesn't report its compliance state to Azure. However, you can still monitor the reconciliation state on the cluster with the ArgoCD UI.
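+
+You can also query the ArgoCD `Application` resources directly with `kubectl`, assuming ArgoCD runs in the `argocd` namespace on the cluster:
+
+```bash
+# List ArgoCD applications with their sync and health status
+kubectl get applications -n argocd --context large
+```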
+
+To access the ArgoCD UI on the `large` cluster, run the following command:
+
+```bash
+# Get ArgoCD username and password
+echo "ArgoCD username: admin, password: $(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" --context large| base64 -d)"
+# output:
+# ArgoCD username: admin, password: eCllTELZdIZfApPL
+
+kubectl port-forward svc/argocd-server 8080:80 -n argocd --context large
+```
+
+Next, open `localhost:8080` in your browser and provide the username and password printed by the script. You'll see a web page similar to this one:
+
+ :::image type="content" source="media/workload-management/argocd-ui.png" alt-text="Screenshot showing the Argo CD user interface web page." lightbox="media/workload-management/argocd-ui.png":::
+
+Select the `stage` tile to see more details on the reconciliation of the `stage` branch to this cluster. You can select the `SYNC` button to force the reconciliation and speed up the process.
+
+Once the new configuration has reached the cluster, check the `uat-test` application instance at `localhost:8002` after running the following commands:
+
+```bash
+kubectl rollout restart deployment hello-world-deployment -n stage-kaizen-app-team-hello-world-app-uat-test --context=large
+kubectl port-forward svc/hello-world-service -n stage-kaizen-app-team-hello-world-app-uat-test 8002:8000 --context=large
+```
+
+You'll see the updated database URL on the greeting page.
+## 5 - Platform Team: Add cluster type to environment
+
+Currently, only the `drone` and `large` cluster types are included in the `Stage` environment. Let's add the `small` cluster type to `Stage` as well. Even though there's no physical cluster representing this cluster type, you can see how the scheduler reacts to the change.
+
+```bash
+# Switch to the stage branch (representing the Stage environment) in the control-plane folder
+git checkout stage
+
+# Add "small" cluster type in west-us region
+mkdir -p cluster-types/west-us/small
+cat <<EOF >cluster-types/west-us/small/small-cluster-type.yaml
+apiVersion: scheduler.kalypso.io/v1alpha1
+kind: ClusterType
+metadata:
+ name: small
+ labels:
+ region: west-us
+ size: small
+spec:
+ reconciler: argocd
+ namespaceService: default
+EOF
+
+git add .
+git commit -m 'add new cluster type'
+git config pull.rebase false
+git pull --no-edit
+git push
+```
+
+In a few seconds, the scheduler submits a PR to the `Platform GitOps` repository. According to the `uat-test-policy` that you created, it assigns the `uat-test` deployment target to the new cluster type, as that target is supposed to work on all available cluster types in the environment.
+## Clean up resources
+When no longer needed, delete the resources that you created. To do so, run the following command:
+
+```bash
+# In kalypso folder
+./deploy.sh -d -p <prefix. e.g. kalypso> -o <github org. e.g. eedorenko> -t <github token> -l <azure-location. e.g. westus2>
+```
+
+## Next steps
+
+You have performed tasks for a few common workload management scenarios in a multi-cluster Kubernetes environment. There are many other scenarios you may want to explore. Continue using the sample to see how you can implement the use cases that are most common in your daily activities.
+
+To gain a deeper understanding of the underlying concepts and mechanics, refer to the following resources:
+
+> [!div class="nextstepaction"]
+> - [Concept: Workload Management in Multi-cluster environment with GitOps](conceptual-workload-management.md)
+> - [Sample implementation: Workload Management in Multi-cluster environment with GitOps](https://github.com/microsoft/kalypso)
+
azure-arc Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/private-link-security.md
If you're only planning to use Private Links to support a few machines or server
#### Linux
-1. Using an account with the **sudoers** privilege, run `sudo nano /etc/hosts` to open the hosts file.
+1. Open the `/etc/hosts` hosts file in a text editor.
1. Add the private endpoint IPs and hostnames as shown in the table from step 3 under [Manual DNS server configuration](#manual-dns-server-configuration). The hosts file asks for the IP address first followed by a space and then the hostname.
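   The entries follow the standard hosts-file format. A sketch with placeholder IP addresses and two example Azure Arc endpoint hostnames (use the exact values from the table for your own private endpoint):

   ```config
   10.0.0.4 gbl.his.arc.azure.com
   10.0.0.5 agentserviceapi.guestconfiguration.azure.com
   ```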
azure-functions Configure Networking How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-networking-how-to.md
Title: How to configure Azure Functions with a virtual network description: Article that shows you how to perform certain virtual networking tasks for Azure Functions.- Previously updated : 03/04/2022+ Last updated : 03/24/2023 # How to configure Azure Functions with a virtual network
-This article shows you how to perform tasks related to configuring your function app to connect to and run on a virtual network. For an in-depth tutorial on how to secure your storage account, please refer to the [Connect to a Virtual Network tutorial](functions-create-vnet.md). To learn more about Azure Functions and networking, see [Azure Functions networking options](functions-networking-options.md).
+This article shows you how to perform tasks related to configuring your function app to connect to and run on a virtual network. For an in-depth tutorial on how to secure your storage account, refer to the [Connect to a Virtual Network tutorial](functions-create-vnet.md). To learn more about Azure Functions and networking, see [Azure Functions networking options](functions-networking-options.md).
## Restrict your storage account to a virtual network
-When you create a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. You can replace this storage account with one that is secured with service endpoints or private endpoints. When configuring your storage account with private endpoints, public access to your storage account is not automatically disabled. In order to disable public access to your storage account, configure your storage firewall to allow access from only selected networks.
+When you create a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. You can secure a new storage account behind a virtual network during account creation. At this time, you can't secure an existing storage account being used by your function app in the same way.
+> [!NOTE]
+> Securing your storage account is supported for all tiers in both Dedicated (App Service) and Elastic Premium plans. Consumption plans currently don't support virtual networks.
+### During function app creation
-> [!NOTE]
-> This feature currently works for all Windows and Linux virtual network-supported SKUs in the Dedicated (App Service) plan and for Windows Elastic Premium plans. Consumption tier isn't supported.
+You can create a new function app along with a new storage account secured behind a virtual network. The following links show you how to create these resources by using either the Azure portal or by using deployment templates:
+
+# [Azure portal](#tab/portal)
+
+Complete the following tutorial to create a new function app with a secured storage account: [Use private endpoints to integrate Azure Functions with a virtual network](functions-create-vnet.md).
+
+# [Deployment templates](#tab/templates)
+
+Use Bicep or Azure Resource Manager (ARM) [quickstart templates](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/function-app-storage-private-endpoints) to create secured function app and storage account resources.
+++
+### Existing function app
+
+When you have an existing function app, you can't secure the storage account currently being used by the app. You must instead swap out the existing storage account for a new, secured storage account.
+
+To secure the storage for an existing function app:
+
+1. Choose a function app with a storage account that doesn't have service endpoints or private endpoints enabled.
-To set up a function with a storage account restricted to a private network:
+1. [Enable virtual network integration](./functions-networking-options.md#enable-virtual-network-integration) for your function app.
-1. Create a function with a storage account that does not have service endpoints enabled.
+1. Create or configure a second storage account. This is going to be the secured storage account that your function app uses instead.
-1. Configure the function to connect to your virtual network.
+1. [Create a file share](../storage/files/storage-how-to-create-file-share.md#create-a-file-share) in the new storage account.
-1. Create or configure a different storage account. This will be the storage account we secure with service endpoints and connect our function.
+1. Secure the new storage account in one of the following ways:
-1. [Create a file share](../storage/files/storage-how-to-create-file-share.md#create-a-file-share) in the secured storage account.
+ * [Create a private endpoint](../storage/common/storage-private-endpoints.md#creating-a-private-endpoint). When using private endpoint connections, the storage account must have private endpoints for the `file` and `blob` subresources. For Durable Functions, you must also make `queue` and `table` subresources accessible through private endpoints. (A CLI sketch follows this list.)
-1. Enable service endpoints or private endpoint for the storage account.
- * If using private endpoint connections, the storage account will need a private endpoint for the `file` and `blob` sub-resources. If using certain capabilities like Durable Functions, you will also need `queue` and `table` accessible through a private endpoint connection.
- * If using service endpoints, enable the subnet dedicated to your function apps for storage accounts.
+ * [Enable a service endpoint from the virtual network](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network). When using service endpoints, enable the subnet dedicated to your function apps for storage accounts on the firewall.
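+
+ For example, the following Azure CLI sketch creates a private endpoint for the `blob` subresource; all resource names are placeholders, and you'd repeat the command with a different `--group-id` for each required subresource:
+
+ ```azurecli
+ az network private-endpoint create \
+ --name storage-blob-pe \
+ --resource-group myResourceGroup \
+ --vnet-name myVnet \
+ --subnet myFunctionSubnet \
+ --private-connection-resource-id $(az storage account show --name mysecurestorage --query id --output tsv) \
+ --group-id blob \
+ --connection-name storage-blob-connection
+ ```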
-1. Copy the file and blob content from the function app storage account to the secured storage account and file share.
+1. Copy the file and blob content from the current storage account used by the function app to the newly secured storage account and file share.
1. Copy the connection string for this storage account.
azure-functions Durable Functions Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-bindings.md
Title: Bindings for Durable Functions - Azure description: How to use triggers and bindings for the Durable Functions extension for Azure Functions. Previously updated : 12/07/2022 Last updated : 03/22/2023
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Bindings for Durable Functions (Azure Functions) The [Durable Functions](durable-functions-overview.md) extension introduces three trigger bindings that control the execution of orchestrator, entity, and activity functions. It also introduces an output binding that acts as a client for the Durable Functions runtime.
+Make sure to choose your Durable Functions development language at the top of the article.
++
+> [!IMPORTANT]
+> This article supports both Python v1 and Python v2 programming models for Durable Functions.
+> The Python v2 programming model is currently in preview.
+
+## Python v2 programming model
+
+Durable Functions provides preview support of the new [Python v2 programming model](../functions-reference-python.md?pivots=python-mode-decorators). To use the v2 model, you must install the Durable Functions SDK, which is the PyPI package `azure-functions-durable`, version `1.2.2` or a later version. During the preview, you can provide feedback and suggestions in the [Durable Functions SDK for Python repo](https://github.com/Azure/azure-functions-durable-python/issues).
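+
+For example, you can install or upgrade the SDK with pip:
+
+```bash
+pip install --upgrade "azure-functions-durable>=1.2.2"
+```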
+
+Using [Extension Bundles](../functions-bindings-register.md#extension-bundles) isn't currently supported for the v2 model with Durable Functions. You'll instead need to manage your extensions manually as follows:
+
+1. Remove the `extensionBundle` section of your `host.json` as described in [this Functions article](../functions-run-local.md#install-extensions).
+
+1. Run the `func extensions install --package Microsoft.Azure.WebJobs.Extensions.DurableTask --version 2.9.1` command on your terminal. This installs the Durable Functions extension for your app, which allows you to use the v2 model preview.
++ ## Orchestration trigger The orchestration trigger enables you to author [durable orchestrator functions](durable-functions-types-features-overview.md#orchestrator-functions). This trigger executes when a new orchestration instance is scheduled and when an existing orchestration instance receives an event. Examples of events that can trigger orchestrator functions include durable timer expirations, activity function responses, and events raised by external clients.
-When you author functions in .NET, the orchestration trigger is configured using the [OrchestrationTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.orchestrationtriggerattribute) .NET attribute. For Java, the `@DurableOrchestrationTrigger` annotation is used.
+When you author functions in .NET, the orchestration trigger is configured using the [OrchestrationTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.orchestrationtriggerattribute) .NET attribute.
+For Java, the `@DurableOrchestrationTrigger` annotation is used to configure the orchestration trigger.
+When you write orchestrator functions, the orchestration trigger is defined by the following JSON object in the `bindings` array of the *function.json* file:
-When you write orchestrator functions in scripting languages, like JavaScript, Python, or PowerShell, the orchestration trigger is defined by the following JSON object in the `bindings` array of the *function.json* file:
+```json
+{
+ "name": "<Name of input parameter in function signature>",
+ "orchestration": "<Optional - name of the orchestration>",
+ "type": "orchestrationTrigger",
+ "direction": "in"
+}
+```
+
+* `orchestration` is the name of the orchestration that clients must use when they want to start new instances of this orchestrator function. This property is optional. If not specified, the name of the function is used.
+Azure Functions supports two programming models for Python. The way that you define an orchestration trigger depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define an orchestration trigger using the `orchestration_trigger` decorator directly in your Python function code.
+
+In the v2 model, the Durable Functions triggers and bindings are accessed from an instance of `DFApp`, which is a subclass of `FunctionApp` that additionally exports Durable Functions-specific decorators.
+
+# [v1](#tab/python-v1)
+When you write orchestrator functions in the Python v1 programming model, the orchestration trigger is defined by the following JSON object in the `bindings` array of the *function.json* file:
```json {
When you write orchestrator functions in scripting languages, like JavaScript, P
* `orchestration` is the name of the orchestration that clients must use when they want to start new instances of this orchestrator function. This property is optional. If not specified, the name of the function is used. +++ Internally, this trigger binding polls the configured durable store for new orchestration events, such as orchestration start events, durable timer expiration events, activity function response events, and external events raised by other functions. ### Trigger behavior
Here are some notes about the orchestration trigger:
> [!WARNING] > Orchestrator functions should never use any input or output bindings other than the orchestration trigger binding. Doing so has the potential to cause problems with the Durable Task extension because those bindings may not obey the single-threading and I/O rules. If you'd like to use other bindings, add them to an activity function called from your orchestrator function. For more information about coding constraints for orchestrator functions, see the [Orchestrator function code constraints](durable-functions-code-constraints.md) documentation. > [!WARNING]
-> JavaScript and Python orchestrator functions should never be declared `async`.
+> Orchestrator functions should never be declared `async`.
### Trigger usage
The orchestration trigger binding supports both inputs and outputs. Here are som
The following example code shows what the simplest "Hello World" orchestrator function might look like. Note that this example orchestrator doesn't actually schedule any tasks.
-# [C# (InProc)](#tab/csharp-inproc)
+The specific attribute used to define the trigger depends on whether you are running your C# functions [in-process](../functions-dotnet-class-library.md) or in an [isolated worker process](../dotnet-isolated-process-guide.md).
+
+# [In-process](#tab/in-process)
```csharp [FunctionName("HelloWorld")]
public static string Run([OrchestrationTrigger] IDurableOrchestrationContext con
> [!NOTE] > The previous code is for Durable Functions 2.x. For Durable Functions 1.x, you must use `DurableOrchestrationContext` instead of `IDurableOrchestrationContext`. For more information about the differences between versions, see the [Durable Functions Versions](durable-functions-versions.md) article.
-# [C# (Isolated)](#tab/csharp-isolated)
+# [Isolated process](#tab/isolated-process)
```csharp [Function("HelloWorld")]
public static string Run([OrchestrationTrigger] TaskOrchestrationContext context
> [!NOTE] > In both Durable Functions in-process and .NET-isolated, the orchestration input can be extracted via `context.GetInput<T>()`. However, .NET-isolated also supports the input being supplied as a parameter, as shown above. The input binding binds to the first parameter that has no binding attribute on it and isn't a well-known type already covered by other input bindings (that is, `FunctionContext`).
-# [JavaScript](#tab/javascript)
++ ```javascript const df = require("durable-functions");
module.exports = df.orchestrator(function*(context) {
> [!NOTE] > The `durable-functions` library takes care of calling the synchronous `context.done` method when the generator function exits.
+# [v2](#tab/python-v2)
+
+```python
+import azure.functions as func
+import azure.durable_functions as df
-# [Python](#tab/python)
+myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+@myApp.orchestration_trigger(context_name="context")
+def my_orchestrator(context):
+ result = yield context.call_activity("Hello", "Tokyo")
+ return result
+```
+
+# [v1](#tab/python-v1)
```python import azure.durable_functions as df
def orchestrator_function(context: df.DurableOrchestrationContext):
main = df.Orchestrator.create(orchestrator_function) ```-
-# [PowerShell](#tab/powershell)
```powershell param($Context)
param($Context)
$input = $Context.Input $input ```-
-# [Java](#tab/java)
```java @FunctionName("HelloWorldOrchestration")
public String helloWorldOrchestration(
return String.format("Hello %s!", ctx.getInput(String.class)); } ```- Most orchestrator functions call activity functions, so here is a "Hello World" example that demonstrates how to call an activity function:-
-# [C# (InProc)](#tab/csharp-inproc)
+# [In-process](#tab/in-process)
```csharp [FunctionName("HelloWorld")]
public static async Task<string> Run(
> [!NOTE] > The previous code is for Durable Functions 2.x. For Durable Functions 1.x, you must use `DurableOrchestrationContext` instead of `IDurableOrchestrationContext`. For more information about the differences between versions, see the [Durable Functions versions](durable-functions-versions.md) article.
-# [C# (Isolated)](#tab/csharp-isolated)
+# [Isolated process](#tab/isolated-process)
```csharp [Function("HelloWorld")]
public static async Task<string> Run(
} ```
-# [JavaScript](#tab/javascript)
++ ```javascript const df = require("durable-functions");
module.exports = df.orchestrator(function*(context) {
return result; }); ```-
-# [Python](#tab/python)
-
-```python
-import azure.durable_functions as df
-
-def orchestrator_function(context: df.DurableOrchestrationContext):
- input = context.get_input()
- result = yield context.call_activity('SayHello', input['name'])
- return result
-
-main = df.Orchestrator.create(orchestrator_function)
-```
-
-# [PowerShell](#tab/powershell)
-
-```powershell
-param($Context)
-
-$name = $Context.Input.Name
-
-$output = Invoke-DurableActivity -FunctionName 'SayHello' -Input $name
-
-$output
-```
-
-# [Java](#tab/java)
```java @FunctionName("HelloWorld")
public String helloWorldOrchestration(
return result; } ```-- ## Activity trigger The activity trigger enables you to author functions that are called by orchestrator functions, known as [activity functions](durable-functions-types-features-overview.md#activity-functions).
-If you're authoring functions in .NET, the activity trigger is configured using the [ActivityTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.activitytriggerattribute) .NET attribute. For Java, the `@DurableActivityTrigger` annotation is used.
+The activity trigger is configured using the [ActivityTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.activitytriggerattribute) .NET attribute.
+The activity trigger is configured using the `@DurableActivityTrigger` annotation.
+The activity trigger is defined by the following JSON object in the `bindings` array of *function.json*:
+
+```json
+{
+ "name": "<Name of input parameter in function signature>",
+ "activity": "<Optional - name of the activity>",
+ "type": "activityTrigger",
+ "direction": "in"
+}
+```
+
+* `activity` is the name of the activity. This value is the name that orchestrator functions use to invoke this activity function. This property is optional. If not specified, the name of the function is used.
+The way that you define an activity trigger depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The activity trigger is defined using the `activity_trigger` decorator directly in your Python function code.
-If you're using JavaScript, Python, or PowerShell, the activity trigger is defined by the following JSON object in the `bindings` array of *function.json*:
+# [v1](#tab/python-v1)
+The activity trigger is defined by the following JSON object in the `bindings` array of *function.json*:
```json {
If you're using JavaScript, Python, or PowerShell, the activity trigger is defin
* `activity` is the name of the activity. This value is the name that orchestrator functions use to invoke this activity function. This property is optional. If not specified, the name of the function is used. ++ Internally, this trigger binding polls the configured durable store for new activity execution events. ### Trigger behavior
The activity trigger binding supports both inputs and outputs, just like the orc
### Trigger sample
-The following example code shows what a simple `SayHello` activity function might look like:
+The following example code shows what a simple `SayHello` activity function might look like.
-# [C# (InProc)](#tab/csharp-inproc)
+# [In-process](#tab/in-process)
```csharp [FunctionName("SayHello")]
public static string SayHello([ActivityTrigger] string name)
} ```
-# [C# (Isolated)](#tab/csharp-isolated)
+# [Isolated process](#tab/isolated-process)
In the .NET-isolated worker, only serializable types representing your input are supported for the `[ActivityTrigger]`.
public static string SayHello([ActivityTrigger] string name)
} ```
-# [JavaScript](#tab/javascript)
+ ```javascript module.exports = async function(context) { return `Hello ${context.bindings.name}!`;
module.exports = async function(context, name) {
}; ```
-# [Python](#tab/python)
+
+# [v2](#tab/python-v2)
+
+```python
+import azure.functions as func
+import azure.durable_functions as df
+
+myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+
+@myApp.activity_trigger(input_name="myInput")
+def my_activity(myInput: str):
+ return "Hello " + myInput
+```
+
+# [v1](#tab/python-v1)
```python def main(name: str) -> str: return f"Hello {name}!" ```
-# [PowerShell](#tab/powershell)
++ ```powershell param($name) "Hello $name!" ```-
-# [Java](#tab/java)
- ```java @FunctionName("SayHello") public String sayHello(@DurableActivityTrigger(name = "name") String name) { return String.format("Hello %s!", name); } ```-- ### Using input and output bindings
-You can use regular input and output bindings in addition to the activity trigger binding. For example, you can take the input to your activity binding, and send a message to an EventHub using the EventHub output binding:
+You can use regular input and output bindings in addition to the activity trigger binding.
+
+For example, you can take the input to your activity binding, and send a message to an EventHub using the EventHub output binding:
```json {
module.exports = async function (context) {
context.bindings.outputEventHubMessage = context.bindings.message; }; ``` ## Orchestration client
The orchestration client binding enables you to write functions that interact wi
* Send events to them while they're running. * Purge instance history.
-If you're using .NET, you can bind to the orchestration client by using the [DurableClientAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.durableclientattribute) attribute ([OrchestrationClientAttribute](/dotnet/api/microsoft.azure.webjobs.orchestrationclientattribute?view=azure-dotnet-legacy&preserve-view=true) in Durable Functions v1.x). For Java, use the `@DurableClientInput` annotation.
+You can bind to the orchestration client by using the [DurableClientAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.durableclientattribute) attribute ([OrchestrationClientAttribute](/dotnet/api/microsoft.azure.webjobs.orchestrationclientattribute?view=azure-dotnet-legacy&preserve-view=true) in Durable Functions v1.x).
+You can bind to the orchestration client by using the `@DurableClientInput` annotation.
+The durable client trigger is defined by the following JSON object in the `bindings` array of *function.json*:
-If you're using scripting languages, like JavaScript, Python, or PowerShell, the durable client trigger is defined by the following JSON object in the `bindings` array of *function.json*:
+```json
+{
+ "name": "<Name of input parameter in function signature>",
+ "taskHub": "<Optional - name of the task hub>",
+ "connectionName": "<Optional - name of the connection string app setting>",
+ "type": "orchestrationClient",
+ "direction": "in"
+}
+```
+
+* `taskHub` - Used in scenarios where multiple function apps share the same storage account but need to be isolated from each other. If not specified, the default value from `host.json` is used. This value must match the value used by the target orchestrator functions.
+* `connectionName` - The name of an app setting that contains a storage account connection string. The storage account represented by this connection string must be the same one used by the target orchestrator functions. If not specified, the default storage account connection string for the function app is used.
+
+> [!NOTE]
+> In most cases, we recommend that you omit these properties and rely on the default behavior.
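+
+If you do need a non-default task hub name, it's typically configured once in `host.json` rather than per binding. A minimal sketch:
+
+```json
+{
+  "extensions": {
+    "durableTask": {
+      "hubName": "MyTaskHub"
+    }
+  }
+}
+```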
+The way that you define a durable client trigger depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The durable client is defined using the `durable_client_input` decorator directly in your Python function code.
+
+# [v1](#tab/python-v1)
+The durable client trigger is defined by the following JSON object in the `bindings` array of *function.json*:
```json {
If you're using scripting languages, like JavaScript, Python, or PowerShell, the
> [!NOTE] > In most cases, we recommend that you omit these properties and rely on the default behavior. ++ ### Client usage
-In .NET functions, you typically bind to [IDurableClient](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableclient) ([DurableOrchestrationClient](/dotnet/api/microsoft.azure.webjobs.durableorchestrationclient?view=azure-dotnet-legacy&preserve-view=true) in Durable Functions v1.x), which gives you full access to all orchestration client APIs supported by Durable Functions. For Java, you bind to the `DurableClientContext` class. In other languages, you must use the language-specific SDK to get access to a client object.
+You typically bind to [IDurableClient](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.idurableclient) ([DurableOrchestrationClient](/dotnet/api/microsoft.azure.webjobs.durableorchestrationclient?view=azure-dotnet-legacy&preserve-view=true) in Durable Functions v1.x), which gives you full access to all orchestration client APIs supported by Durable Functions.
+You typically bind to the `DurableClientContext` class.
+You must use the language-specific SDK to get access to a client object.
Here's an example queue-triggered function that starts a "HelloWorld" orchestration.
-# [C# (InProc)](#tab/csharp-inproc)
+
+# [In-process](#tab/in-process)
```csharp [FunctionName("QueueStart")]
public static Task Run(
> [!NOTE] > The previous C# code is for Durable Functions 2.x. For Durable Functions 1.x, you must use `OrchestrationClient` attribute instead of the `DurableClient` attribute, and you must use the `DurableOrchestrationClient` parameter type instead of `IDurableOrchestrationClient`. For more information about the differences between versions, see the [Durable Functions Versions](durable-functions-versions.md) article.
-# [C# (Isolated)](#tab/csharp-isolated)
+# [Isolated process](#tab/isolated-process)
```csharp [Function("QueueStart")]
public static Task Run(
} ```
-# [JavaScript](#tab/javascript)
++ **function.json** ```json
public static Task Run(
} ``` **index.js** ```javascript const df = require("durable-functions");
module.exports = async function (context) {
return instanceId = await client.startNew("HelloWorld", undefined, context.bindings.input); }; ```+
+**run.ps1**
+```powershell
+param([string] $input, $TriggerMetadata)
+
+$InstanceId = Start-DurableOrchestration -FunctionName 'HelloWorld' -Input $input
+```
-# [Python](#tab/python)
+# [v2](#tab/python-v2)
+
+```python
+import azure.functions as func
+import azure.durable_functions as df
+
+myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+
+@myApp.route(route="orchestrators/{functionName}")
+@myApp.durable_client_input(client_name="client")
+async def durable_trigger(req: func.HttpRequest, client):
+ function_name = req.route_params.get('functionName')
+ instance_id = await client.start_new(function_name)
+ response = client.create_check_status_response(req, instance_id)
+ return response
+```
+
+# [v1](#tab/python-v1)
**`function.json`** ```json
async def main(msg: func.QueueMessage, starter: str) -> None:
payload = msg.get_body().decode('utf-8') instance_id = await client.start_new("HelloWorld", client_input=payload) ```+
-# [PowerShell](#tab/powershell)
-
-**function.json**
-```json
-{
- "bindings": [
- {
- "name": "input",
- "type": "queueTrigger",
- "queueName": "durable-function-trigger",
- "direction": "in"
- },
- {
- "name": "starter",
- "type": "durableClient",
- "direction": "in"
- }
- ]
-}
-```
-
-**run.ps1**
-```powershell
-param([string] $input, $TriggerMetadata)
-
-$InstanceId = Start-DurableOrchestration -FunctionName $FunctionName -Input $input
-```
-
-# [Java](#tab/java)
```java @FunctionName("QueueStart")
public void queueStart(
durableContext.getClient().scheduleNewOrchestrationInstance("HelloWorld", input); } ```--- More details on starting instances can be found in [Instance management](durable-functions-instance-management.md). ## Entity trigger
Entity triggers allow you to author [entity functions](durable-functions-entitie
Internally, this trigger binding polls the configured durable store for new entity operations that need to be executed.
-If you're authoring functions in .NET, the entity trigger is configured using the [EntityTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.entitytriggerattribute) .NET attribute.
+The entity trigger is configured using the [EntityTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.entitytriggerattribute) .NET attribute.
-If you're using JavaScript, Python, or PowerShell, the entity trigger is defined by the following JSON object in the `bindings` array of *function.json*:
+> [!NOTE]
+> Entity triggers aren't yet supported for isolated worker process apps.
+The entity trigger is defined by the following JSON object in the `bindings` array of *function.json*:
```json {
If you're using JavaScript, Python, or PowerShell, the entity trigger is defined
} ```
+By default, the name of an entity is the name of the function.
> [!NOTE]
-> Entity triggers are not yet supported in Java or in the .NET-isolated worker.
+> Entity triggers aren't yet supported for Java.
+The way that you define an entity trigger depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The entity trigger is defined using the `entity_trigger` decorator directly in your Python function code, as the following sketch shows.
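+
+As a minimal sketch, here's a counter entity using the v2 preview SDK (`azure-functions-durable` 1.2.2 or later). The entity name defaults to the function name:
+
+```python
+import azure.functions as func
+import azure.durable_functions as df
+
+myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+
+@myApp.entity_trigger(context_name="context")
+def counter(context: df.DurableEntityContext):
+    # Read the current state, defaulting to 0 on first access
+    current_value = context.get_state(lambda: 0)
+    operation = context.operation_name
+    if operation == "add":
+        current_value += context.get_input()
+    elif operation == "reset":
+        current_value = 0
+    context.set_state(current_value)
+```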
+
+# [v1](#tab/python-v1)
+The entity trigger is defined by the following JSON object in the `bindings` array of *function.json*:
+
+```json
+{
+ "name": "<Name of input parameter in function signature>",
+ "entityName": "<Optional - name of the entity>",
+ "type": "entityTrigger",
+ "direction": "in"
+}
+```
By default, the name of an entity is the name of the function. ++ ### Trigger behavior Here are some notes about the entity trigger:
For more information and examples on defining and interacting with entity trigge
The entity client binding enables you to asynchronously trigger [entity functions](#entity-trigger). These functions are sometimes referred to as [client functions](durable-functions-types-features-overview.md#client-functions).
-If you're using .NET precompiled functions, you can bind to the entity client by using the [DurableClientAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.durableclientattribute) .NET attribute.
+You can bind to the entity client by using the [DurableClientAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.durableclientattribute) .NET attribute in .NET class library functions.
> [!NOTE] > The `[DurableClientAttribute]` can also be used to bind to the [orchestration client](#orchestration-client).
-If you're using scripting languages (like C# scripting, JavaScript, or Python) for development, the entity trigger is defined by the following JSON object in the `bindings` array of *function.json*:
+The entity client is defined by the following JSON object in the `bindings` array of *function.json*:
```json {
If you're using scripting languages (like C# scripting, JavaScript, or Python) f
} ```
+* `taskHub` - Used in scenarios where multiple function apps share the same storage account but need to be isolated from each other. If not specified, the default value from `host.json` is used. This value must match the value used by the target entity functions.
+* `connectionName` - The name of an app setting that contains a storage account connection string. The storage account represented by this connection string must be the same one used by the target entity functions. If not specified, the default storage account connection string for the function app is used.
+ > [!NOTE]
-> Entity clients are not yet supported in Java.
+> In most cases, we recommend that you omit the optional properties and rely on the default behavior.
+The way that you define an entity client depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The entity client is defined using the `durable_client_input` decorator directly in your Python function code.
+
+# [v1](#tab/python-v1)
+The entity client is defined by the following JSON object in the `bindings` array of *function.json*:
+
+```json
+{
+ "name": "<Name of input parameter in function signature>",
+ "taskHub": "<Optional - name of the task hub>",
+ "connectionName": "<Optional - name of the connection string app setting>",
+ "type": "durableClient",
+ "direction": "in"
+}
+```
* `taskHub` - Used in scenarios where multiple function apps share the same storage account but need to be isolated from each other. If not specified, the default value from `host.json` is used. This value must match the value used by the target entity functions. * `connectionName` - The name of an app setting that contains a storage account connection string. The storage account represented by this connection string must be the same one used by the target entity functions. If not specified, the default storage account connection string for the function app is used.
If you're using scripting languages (like C# scripting, JavaScript, or Python) f
> [!NOTE] > In most cases, we recommend that you omit the optional properties and rely on the default behavior. +
+> [!NOTE]
+> Entity clients aren't yet supported for Java.
+ For more information and examples on interacting with entities as a client, see the [Durable Entities](durable-functions-entities.md#access-entities) documentation. <a name="host-json"></a>
azure-functions Functions Bindings Cosmosdb V2 Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md
Title: Azure Cosmos DB input binding for Functions 2.x and higher description: Learn to use the Azure Cosmos DB input binding in Azure Functions. Previously updated : 03/04/2022 Last updated : 03/02/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
For information on setup and configuration details, see the [overview](./functio
> When the collection is [partitioned](../cosmos-db/partitioning-overview.md#logical-partitions), lookup operations must also specify the partition key value. >
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example Unless otherwise noted, examples in this article target version 3.x of the [Azure Cosmos DB extension](functions-bindings-cosmosdb-v2.md). For use with extension version 4.x, you need to replace the string `collection` in property and attribute names with `container`.
This section contains the following examples that read a single document by spec
<a id="queue-trigger-look-up-id-from-json-python"></a>
+The examples depend on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+ ### Queue trigger, look up ID from JSON
-The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function reads a single document and updates the document's text value.
+The following example shows an Azure Cosmos DB input binding. The function reads a single document and updates the document's text value.
+
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.queue_trigger(arg_name="msg",
+ queue_name="outqueue",
+ connection="AzureWebJobsStorage")
+@app.cosmos_db_input(arg_name="documents",
+ database_name="MyDatabase",
+ collection_name="MyCollection",
+ id="{msg.payload_property}",
+ partition_key="{msg.payload_property}",
+ connection_string_setting="MyAccount_COSMOSDB")
+@app.cosmos_db_output(arg_name="outputDocument",
+ database_name="MyDatabase",
+ collection_name="MyCollection",
+ connection_string_setting="MyAccount_COSMOSDB")
+def test_function(msg: func.QueueMessage,
+ inputDocument: func.DocumentList,
+ outputDocument: func.Out[func.Document]):
+ doc = inputDocument[0]
+ doc["text"] = "This was updated!"
+ outputDocument.set(doc)
+ print(f"Updated document.")
+```
+
+# [v1](#tab/python-v1)
Here's the binding data in the *function.json* file:
def main(queuemsg: func.QueueMessage, documents: func.DocumentList) -> func.Docu
return document ``` ++ <a id="http-trigger-look-up-id-from-query-string-python"></a> ### HTTP trigger, look up ID from query string
-The following example shows a [Python function](functions-reference-python.md) that retrieves a single document. The function is triggered by an HTTP request that uses a query string to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a `ToDoItem` document from the specified database and collection.
+The following example shows a function that retrieves a single document. The function is triggered by an HTTP request that uses a query string to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a `ToDoItem` document from the specified database and collection.
+
+# [v2](#tab/python-v2)
+
+No equivalent sample for v2 at this time.
+
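+While there's no official v2 sample yet, the following minimal sketch shows how this lookup might be written with the v2 decorators. It assumes the `cosmos_db_input` decorator accepts the same `{Query.id}` and `{Query.partitionKeyValue}` binding expressions used by the v1 *function.json* sample; the route, database, collection, and connection setting names are placeholders.
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.route(route="todoitems")
+@app.cosmos_db_input(arg_name="todoitems",
+                     database_name="ToDoItems",
+                     collection_name="Items",
+                     id="{Query.id}",
+                     partition_key="{Query.partitionKeyValue}",
+                     connection_string_setting="CosmosDBConnection")
+def get_todoitem(req: func.HttpRequest, todoitems: func.DocumentList) -> str:
+    # The input binding resolves the ID and partition key from the query
+    # string and returns at most one matching document.
+    if todoitems:
+        logging.info('Found ToDoItem, id=%s', todoitems[0]['id'])
+    return 'OK'
+```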
+# [v1](#tab/python-v1)
+ Here's the *function.json* file:
def main(req: func.HttpRequest, todoitems: func.DocumentList) -> str:
return 'OK' ``` ++ <a id="http-trigger-look-up-id-from-route-data-python"></a> ### HTTP trigger, look up ID from route data
-The following example shows a [Python function](functions-reference-python.md) that retrieves a single document. The function is triggered by an HTTP request that uses route data to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a `ToDoItem` document from the specified database and collection.
+The following example shows a function that retrieves a single document. The function is triggered by an HTTP request that uses route data to specify the ID and partition key value to look up. That ID and partition key value are used to retrieve a `ToDoItem` document from the specified database and collection.
+
+# [v2](#tab/python-v2)
+
+No equivalent sample for v2 at this time.
+
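+While there's no official v2 sample yet, here's a minimal sketch of how this lookup might be written with the v2 decorators, assuming route parameters can be referenced with the same `{id}` and `{partitionKeyValue}` binding expressions as in the v1 *function.json* sample; all names are placeholders.
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.route(route="todoitems/{partitionKeyValue}/{id}")
+@app.cosmos_db_input(arg_name="todoitems",
+                     database_name="ToDoItems",
+                     collection_name="Items",
+                     id="{id}",
+                     partition_key="{partitionKeyValue}",
+                     connection_string_setting="CosmosDBConnection")
+def get_todoitem(req: func.HttpRequest, todoitems: func.DocumentList) -> str:
+    # The route parameters supply the document ID and partition key value.
+    if todoitems:
+        logging.info('Found ToDoItem, id=%s', todoitems[0]['id'])
+    return 'OK'
+```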
+# [v1](#tab/python-v1)
Here's the *function.json* file:
def main(req: func.HttpRequest, todoitems: func.DocumentList) -> str:
return 'OK' ``` ++ <a id="queue-trigger-get-multiple-docs-using-sqlquery-python"></a> ### Queue trigger, get multiple docs, using SqlQuery
-The following example shows an Azure Cosmos DB input binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function retrieves multiple documents specified by a SQL query, using a queue trigger to customize the query parameters.
+The following example shows an Azure Cosmos DB input binding and a Python function that uses the binding. The function retrieves multiple documents specified by a SQL query, using a queue trigger to customize the query parameters.
The queue trigger provides a parameter `departmentId`. A queue message of `{ "departmentId" : "Finance" }` would return all records for the finance department.
+# [v2](#tab/python-v2)
+
+No equivalent sample for v2 at this time.
+
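+While there's no official v2 sample yet, a rough sketch follows. It assumes the `cosmos_db_input` decorator exposes a `sql_query` parameter that mirrors the `sqlQuery` property in *function.json*, and that `{departmentId}` resolves from the JSON queue payload as it does in the v1 model; queue, database, and connection names are placeholders.
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.queue_trigger(arg_name="msg",
+                   queue_name="myqueue-items",
+                   connection="AzureWebJobsStorage")
+@app.cosmos_db_input(arg_name="documents",
+                     database_name="MyDatabase",
+                     collection_name="MyCollection",
+                     sql_query="SELECT * FROM c WHERE c.departmentId = {departmentId}",
+                     connection_string_setting="MyAccount_COSMOSDB")
+def get_department_docs(msg: func.QueueMessage, documents: func.DocumentList):
+    # Logs each document returned by the parameterized SQL query.
+    for document in documents:
+        logging.info('Document id: %s', document['id'])
+```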
+# [v1](#tab/python-v1)
+ Here's the binding data in the *function.json* file: ```json
Here's the binding data in the *function.json* file:
} ``` ++ ::: zone-end ::: zone pivot="programming-language-csharp" ## Attributes
Both [in-process](functions-dotnet-class-library.md) and [isolated worker proces
::: zone-end +
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using a decorator, the following properties on the `cosmos_db_input` decorator define the Azure Cosmos DB input binding:
+
+| Property | Description |
+|-|--|
+|`arg_name` | The variable name used in function code that represents the list of documents returned by the binding. |
+|`database_name` | The name of the Azure Cosmos DB database that contains the collection from which the document is read. |
+|`collection_name` | The name of the Azure Cosmos DB collection from which the document is read. |
+|`connection_string_setting` | The name of the app setting that contains the connection string of the Azure Cosmos DB account. |
+|`partition_key` | The partition key value of the document to retrieve. |
+|`id` | The ID of the document to retrieve. |
+
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
From the [Java functions runtime library](/java/api/overview/azure/functions/run
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file, where properties differ by extension version:
azure-functions Functions Bindings Cosmosdb V2 Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-output.md
Title: Azure Cosmos DB output binding for Functions 2.x and higher description: Learn to use the Azure Cosmos DB output binding in Azure Functions. Previously updated : 03/04/2022 Last updated : 03/02/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
The Azure Cosmos DB output binding lets you write a new document to an Azure Cos
For information on setup and configuration details, see the [overview](./functions-bindings-cosmosdb-v2.md).
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example Unless otherwise noted, examples in this article target version 3.x of the [Azure Cosmos DB extension](functions-bindings-cosmosdb-v2.md). For use with extension version 4.x, you need to replace the string `collection` in property and attribute names with `container`.
For bulk insert form the objects first and then run the stringify function. Here
::: zone-end ::: zone pivot="programming-language-powershell"
-The following example show how to write data to Azure Cosmos DB using an output binding. The binding is declared in the function's configuration file (_functions.json_), and take data from a queue message and writes out to an Azure Cosmos DB document.
+The following example shows how to write data to Azure Cosmos DB using an output binding. The binding is declared in the function's configuration file (_functions.json_), and takes data from a queue message and writes out to an Azure Cosmos DB document.
```json {
Push-OutputBinding -Name EmployeeDocument -Value @{
::: zone-end ::: zone pivot="programming-language-python"
-The following example demonstrates how to write a document to an Azure Cosmos DB database as the output of a function.
+The following example demonstrates how to write a document to an Azure Cosmos DB database as the output of a function. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.route()
+@app.cosmos_db_output(arg_name="documents",
+ database_name="DB_NAME",
+ collection_name="COLLECTION_NAME",
+ create_if_not_exists=True,
+ connection_string_setting="CONNECTION_SETTING")
+def main(req: func.HttpRequest, documents: func.Out[func.Document]) -> func.HttpResponse:
+    # Decode the request body and store it as a new document.
+    request_body = req.get_body().decode("utf-8")
+    documents.set(func.Document.from_json(request_body))
+    return func.HttpResponse("OK")
+```
+
+# [v1](#tab/python-v1)
The binding definition is defined in *function.json* where *type* is set to `cosmosDB`.
def main(req: func.HttpRequest, doc: func.Out[func.Document]) -> func.HttpRespon
return 'OK' ``` ++ ::: zone-end ::: zone pivot="programming-language-csharp" ## Attributes
Both [in-process](functions-dotnet-class-library.md) and [isolated worker proces
::: zone-end +
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using a decorator, the following properties on the `cosmos_db_output` decorator define the Azure Cosmos DB output binding:
+
+| Property | Description |
+|-|--|
+|`arg_name` | The variable name used in function code that represents the document to be written. |
+|`database_name` | The name of the Azure Cosmos DB database that contains the collection where the document is created. |
+|`collection_name` | The name of the Azure Cosmos DB collection where the document is created. |
+|`create_if_not_exists` | A Boolean value that indicates whether the database and collection should be created if they don't exist. |
+|`connection_string_setting` | The name of the app setting that contains the connection string of the Azure Cosmos DB account where the document is written. |
+
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
From the [Java functions runtime library](/java/api/overview/azure/functions/run
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file, where properties differ by extension version:
azure-functions Functions Bindings Cosmosdb V2 Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md
The Azure Cosmos DB Trigger uses the [Azure Cosmos DB change feed](../cosmos-db/
For information on setup and configuration details, see the [overview](./functions-bindings-cosmosdb-v2.md).
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example ::: zone pivot="programming-language-csharp"
Write-Host "First document Id modified : $($Documents[0].id)"
::: zone-end ::: zone pivot="programming-language-python"
-The following example shows an Azure Cosmos DB trigger binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function writes log messages when Azure Cosmos DB records are modified.
+The following example shows an Azure Cosmos DB trigger binding. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
-Here's the binding data in the *function.json* file:
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="CosmosDBTrigger")
+@app.cosmos_db_trigger(arg_name="documents",
+                       database_name="DB_NAME",
+                       collection_name="COLLECTION_NAME",
+                       connection_string_setting="CONNECTION_SETTING",
+                       lease_collection_name="leases",
+                       create_lease_collection_if_not_exists=True)
+def test_function(documents: func.DocumentList):
+    if documents:
+        logging.info('Document id: %s', documents[0]['id'])
+```
+
+# [v1](#tab/python-v1)
+
+The function writes log messages when Azure Cosmos DB records are modified. Here's the binding data in the *function.json* file:
[!INCLUDE [functions-cosmosdb-trigger-attributes](../../includes/functions-cosmosdb-trigger-attributes.md)]
Here's the Python code:
logging.info('First document Id modified: %s', documents[0]['id']) ``` ++ ::: zone-end ::: zone pivot="programming-language-csharp" ## Attributes
Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotn
::: zone-end
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using a decorator, the following properties on the `cosmos_db_trigger` decorator define the Azure Cosmos DB trigger:
+
+| Property | Description |
+|-|--|
+|`arg_name` | The variable name used in function code that represents the list of documents with changes. |
+|`database_name` | The name of the Azure Cosmos DB database with the collection being monitored. |
+|`collection_name` | The name of the Azure Cosmos DB collection being monitored. |
+|`connection_string_setting` | The name of the app setting that contains the connection string of the Azure Cosmos DB account being monitored. |
+
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
From the [Java functions runtime library](/java/api/overview/azure/functions/run
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file, where properties differ by extension version:
azure-functions Functions Bindings Event Hubs Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-output.md
description: Learn to write messages to Azure Event Hubs streams using Azure Fun
ms.assetid: daf81798-7acc-419a-bc32-b5a41c6db56b Previously updated : 03/04/2022 Last updated : 03/03/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers
Use the Event Hubs output binding to write events to an event stream. You must h
Make sure the required package references are in place before you try to implement an output binding.
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
++ ## Example ::: zone pivot="programming-language-csharp"
module.exports = function(context) {
Complete PowerShell examples are pending. ::: zone-end ::: zone pivot="programming-language-python"
-The following example shows an event hub trigger binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function writes a message to an event hub.
+The following example shows an Event Hubs output binding and a Python function that uses the binding. The function writes a message to an event hub. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="eventhub_output")
+@app.route(route="eventhub_output")
+@app.event_hub_output(arg_name="event",
+ event_hub_name="<EVENT_HUB_NAME>",
+ connection="<CONNECTION_SETTING>")
+def eventhub_output(req: func.HttpRequest, event: func.Out[str]):
+    body = req.get_body()
+    if body is not None:
+        event.set(body.decode('utf-8'))
+    else:
+        logging.info('req body is none')
+    return 'ok'
+```
+
+# [v1](#tab/python-v1)
The following examples show Event Hubs binding data in the *function.json* file.
def main(timer: func.TimerRequest) -> str:
return 'Message created at: {}'.format(timestamp) ``` ++ ::: zone-end ::: zone pivot="programming-language-java" The following example shows a Java function that writes a message containing the current time to an Event Hub.
The following table explains the binding configuration properties that you set i
::: zone-end
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using a decorator, the following properties on the `event_hub_output` decorator define the Event Hubs output binding:
+
+| Property | Description |
+|-|--|
+|`arg_name` | The variable name used in function code that represents the event. |
+|`event_hub_name` | The name of the event hub. When the event hub name is also present in the connection string, that value overrides this property at runtime. |
+|`connection` | The name of an app setting or setting collection that specifies how to connect to Event Hubs. To learn more, see [Connections](#connections). |
+
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
In the [Java functions runtime library](/java/api/overview/azure/functions/runti
::: zone pivot="programming-language-javascript,programming-language-python,programming-language-powershell" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file, which differs by runtime version.
azure-functions Functions Bindings Event Hubs Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-trigger.md
Title: Azure Event Hubs trigger for Azure Functions
description: Learn to use Azure Event Hubs trigger in Azure Functions. ms.assetid: daf81798-7acc-419a-bc32-b5a41c6db56b Previously updated : 03/04/2022 Last updated : 03/03/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-trigger.md
Title: Azure Functions HTTP trigger description: Learn how to call an Azure Function via HTTP. Previously updated : 03/04/2022 Last updated : 03/06/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
For more information about HTTP bindings, see the [overview](./functions-binding
[!INCLUDE [HTTP client best practices](../../includes/functions-http-client-best-practices.md)]
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
++ ## Example ::: zone pivot="programming-language-csharp"
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
::: zone-end ::: zone pivot="programming-language-python"
-The following example shows a trigger binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding. The function looks for a `name` parameter either in the query string or the body of the HTTP request.
+The following example shows a trigger binding and a Python function that uses the binding. The function looks for a `name` parameter either in the query string or the body of the HTTP request. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+```python
+import azure.functions as func
+import logging
+
+app = func.FunctionApp()
+
+@app.function_name(name="HttpTrigger1")
+@app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
+def test_function(req: func.HttpRequest) -> func.HttpResponse:
+ logging.info('Python HTTP trigger function processed a request.')
+ return func.HttpResponse(
+ "This HTTP triggered function executed successfully.",
+ status_code=200
+ )
+```
+
+# [v1](#tab/python-v1)
Here's the *function.json* file:
def main(req: func.HttpRequest) -> func.HttpResponse:
) ``` ++ ::: zone-end ::: zone pivot="programming-language-csharp" ## Attributes
The following table explains the trigger configuration properties that you set i
::: zone-end +
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using a decorator, the following properties for a trigger are defined in the `route` decorator, which adds the HttpTrigger and HttpOutput bindings:
+
+| Property | Description |
+|-|--|
+| `route` | Route for the HTTP endpoint. If None, it's set to the function name if present, or to the user-defined Python function name. |
+| `trigger_arg_name` | Argument name for HttpRequest, defaults to 'req'. |
+| `binding_arg_name` | Argument name for HttpResponse, defaults to '$return'. |
+| `methods` | A tuple of the HTTP methods to which the function responds. |
+| `auth_level` | Determines what keys, if any, need to be present on the request in order to invoke the function. |
+
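+As a brief, unofficial sketch, these properties might be combined as follows; the route, methods, and function name are placeholders:
+
+```python
+@app.function_name(name="HttpExample")
+@app.route(route="items",
+           methods=("GET", "POST"),
+           auth_level=func.AuthLevel.FUNCTION)
+def items_handler(req: func.HttpRequest) -> func.HttpResponse:
+    # Responds only to GET and POST, and requires a function key.
+    return func.HttpResponse("OK", status_code=200)
+```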
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
In the [Java functions runtime library](/java/api/overview/azure/functions/runti
::: zone pivot="programming-language-javascript,programming-language-python,programming-language-powershell" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the trigger configuration properties that you set in the *function.json* file, which differs by runtime version.
public class HttpTriggerJava {
``` ::: zone-end As an example, the following *function.json* file defines a `route` property for an HTTP trigger with two parameters, `category` and `id`:
As an example, the following *function.json* file defines a `route` property for
} ``` +
+As an example, the following code defines a `route` property for an HTTP trigger with two parameters, `category` and `id`:
+
+# [v2](#tab/python-v2)
+
+```python
+@app.function_name(name="httpTrigger")
+@app.route(route="products/{category:alpha}/{id:int?}")
+```
+
+# [v1](#tab/python-v1)
+
+In the *function.json* file:
+
+```json
+{
+ "bindings": [
+ {
+ "type": "httpTrigger",
+ "name": "req",
+ "direction": "in",
+ "methods": [ "get" ],
+ "route": "products/{category:alpha}/{id:int?}"
+ },
+ {
+ "type": "http",
+ "name": "res",
+ "direction": "out"
+ }
+ ]
+}
+```
+++ ::: zone-end ::: zone pivot="programming-language-javascript"
Route parameters that defined a function's `route` pattern are available to each
The following configuration shows how the `{id}` parameter is passed to the binding's `rowKey`.
+# [v2](#tab/python-v2)
+
+```python
+@app.table_input(arg_name="product", table_name="products",
+ row_key="{id}", partition_key="products",
+ connection="AzureWebJobsStorage")
+```
+
+# [v1](#tab/python-v1)
+ ```json { "type": "table",
The following configuration shows how the `{id}` parameter is passed to the bind
} ``` ++ When you use route parameters, an `invoke_URL_template` is automatically created for your function. Your clients can use the URL template to understand the parameters they need to pass in the URL when calling your function using its URL. Navigate to one of your HTTP-triggered functions in the [Azure portal](https://portal.azure.com) and select **Get function URL**. You can programmatically access the `invoke_URL_template` by using the Azure Resource Manager APIs for [List Functions](/rest/api/appservice/webapps/listfunctions) or [Get Function](/rest/api/appservice/webapps/getfunction).
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md
Title: Azure Service Bus output bindings for Azure Functions
description: Learn to send Azure Service Bus messages from Azure Functions. ms.assetid: daedacf0-6546-4355-a65c-50873e74f66b Previously updated : 03/04/2022 Last updated : 03/06/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
Use Azure Service Bus output binding to send queue or topic messages.
For information on setup and configuration details, see the [overview](functions-bindings-service-bus.md).
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example ::: zone pivot="programming-language-csharp"
Push-OutputBinding -Name outputSbMsg -Value @{
::: zone-end ::: zone pivot="programming-language-python"
-The following example demonstrates how to write out to a Service Bus queue in Python.
+The following example demonstrates how to write out to a Service Bus queue in Python. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.route(route="put_message")
+@app.service_bus_topic_output(arg_name="message",
+ connection="<CONNECTION_SETTING>",
+ topic_name="<TOPIC_NAME>")
+def main(req: func.HttpRequest, message: func.Out[str]) -> func.HttpResponse:
+ input_msg = req.params.get('message')
+ message.set(input_msg)
+ return 'OK'
+```
+
+# [v1](#tab/python-v1)
A Service Bus binding definition is defined in *function.json* where *type* is set to `serviceBus`.
C# script uses a *function.json* file for configuration instead of attributes. T
::: zone-end +
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using a decorator, the following properties on the `service_bus_topic_output` decorator define the Service Bus output binding:
+
+| Property | Description |
+|-|--|
+| `arg_name` | The name of the variable that represents the queue or topic message in function code. |
+| `queue_name` | Name of the queue. Set only if sending queue messages, not for a topic. |
+| `topic_name` | Name of the topic. Set only if sending topic messages, not for a queue. |
+| `connection` | The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections). |
+
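+The example earlier in this article writes to a topic. As an unofficial sketch, sending to a queue instead might look like the following, assuming a `service_bus_queue_output` decorator that takes `queue_name` in place of `topic_name`; the route, queue, and connection names are placeholders:
+
+```python
+@app.route(route="put_queue_message")
+@app.service_bus_queue_output(arg_name="message",
+                              connection="<CONNECTION_SETTING>",
+                              queue_name="<QUEUE_NAME>")
+def put_queue_message(req: func.HttpRequest, message: func.Out[str]) -> func.HttpResponse:
+    # Copies the 'message' query parameter into a Service Bus queue message.
+    message.set(req.params.get('message'))
+    return func.HttpResponse('OK')
+```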
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
The `ServiceBusQueueOutput` and `ServiceBusTopicOutput` annotations are availabl
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file and the `ServiceBus` attribute.
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md
Title: Azure Service Bus trigger for Azure Functions
description: Learn to run an Azure Function as Azure Service Bus messages are created. ms.assetid: daedacf0-6546-4355-a65c-50873e74f66b Previously updated : 03/04/2022 Last updated : 03/06/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
Starting with extension version 3.1.0, you can trigger on a session-enabled queu
For information on setup and configuration details, see the [overview](functions-bindings-service-bus.md).
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example ::: zone pivot="programming-language-csharp"
Write-Host "PowerShell ServiceBus queue trigger function processed message: $myS
::: zone-end ::: zone pivot="programming-language-python"
-The following example demonstrates how to read a Service Bus queue message via a trigger.
+The following example demonstrates how to read a Service Bus queue message via a trigger. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
-A Service Bus binding is defined in *function.json* where *type* is set to `serviceBusTrigger`.
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="ServiceBusQueueTrigger1")
+@app.service_bus_queue_trigger(arg_name="msg",
+ queue_name="<QUEUE_NAME>",
+ connection="<CONNECTION_SETTING>")
+def test_function(msg: func.ServiceBusMessage):
+ logging.info('Python ServiceBus queue trigger processed message: %s',
+ msg.get_body().decode('utf-8'))
+```
+
+# [v1](#tab/python-v1)
+
+A Service Bus binding is defined in *function.json* where *type* is set to `serviceBusTrigger` and the queue is set by `queueName`.
```json {
def main(msg: func.ServiceBusMessage):
logging.info(result) ```+++
+The following example demonstrates how to read a Service Bus topic message via a trigger.
+
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="ServiceBusTopicTrigger1")
+@app.service_bus_topic_trigger(arg_name="message",
+ topic_name="TOPIC_NAME",
+ connection="CONNECTION_SETTING",
+ subscription_name="SUBSCRIPTION_NAME")
+def test_function(message: func.ServiceBusMessage):
+ message_body = message.get_body().decode("utf-8")
+ logging.info("Python ServiceBus topic trigger processed message.")
+ logging.info("Message Body: " + message_body)
+```
+
+# [v1](#tab/python-v1)
+
+A Service Bus binding is defined in *function.json* where *type* is set to `serviceBusTrigger` and the topic is set by `topicName`.
+
+```json
+{
+  "scriptFile": "__init__.py",
+  "bindings": [
+    {
+      "type": "serviceBusTrigger",
+      "direction": "in",
+      "name": "msg",
+      "topicName": "inputtopic",
+      "connection": "AzureServiceBusConnectionString"
+    }
+  ]
+}
+```
+
+The code in *__init__.py* declares a parameter as `func.ServiceBusMessage`, which allows you to read the topic message in your function.
+
+```python
+import json
+import logging
+
+import azure.functions as azf
+
+
+def main(msg: azf.ServiceBusMessage):
+ result = json.dumps({
+ 'message_id': msg.message_id,
+ 'body': msg.get_body().decode('utf-8'),
+ 'content_type': msg.content_type,
+ 'delivery_count': msg.delivery_count,
+ 'expiration_time': (msg.expiration_time.isoformat() if
+ msg.expiration_time else None),
+ 'label': msg.label,
+ 'partition_key': msg.partition_key,
+ 'reply_to': msg.reply_to,
+ 'reply_to_session_id': msg.reply_to_session_id,
+ 'scheduled_enqueue_time': (msg.scheduled_enqueue_time.isoformat() if
+ msg.scheduled_enqueue_time else None),
+ 'session_id': msg.session_id,
+ 'time_to_live': msg.time_to_live,
+ 'to': msg.to,
+ 'user_properties': msg.user_properties,
+ })
+
+ logging.info(result)
+```
+++ ::: zone-end ::: zone pivot="programming-language-csharp" ## Attributes
C# script uses a *function.json* file for configuration instead of attributes. T
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ::: zone-end +
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using a decorator, the following properties on the `service_bus_queue_trigger` decorator define the Service Bus queue trigger:
+
+| Property | Description |
+|-|--|
+| `arg_name` | The name of the variable that represents the queue or topic message in function code. |
+| `queue_name` | Name of the queue to monitor. Set only if monitoring a queue, not for a topic. |
+| `connection` | The name of an app setting or setting collection that specifies how to connect to Service Bus. See [Connections](#connections). |
+
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
See the trigger [example](#example) for more detail.
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file.
azure-functions Functions Bindings Storage Blob Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-input.md
Title: Azure Blob storage input binding for Azure Functions description: Learn how to provide Azure Blob storage input binding data to an Azure Function. Previously updated : 03/04/2022 Last updated : 03/02/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
The input binding allows you to read blob storage data as input to an Azure Func
For information on setup and configuration details, see the [overview](./functions-bindings-storage-blob.md).
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example ::: zone pivot="programming-language-csharp"
Write-Host "PowerShell Blob trigger: Name: $($TriggerMetadata.Name) Size: $($Inp
::: zone-end ::: zone pivot="programming-language-python"
-The following example shows blob input and output bindings in a *function.json* file and [Python code](functions-reference-python.md) that uses the bindings. The function makes a copy of a blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
+The following example shows blob input and output bindings. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+The code creates a copy of a blob.
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="BlobOutput1")
+@app.route(route="file")
+@app.blob_input(arg_name="inputblob",
+ path="sample-workitems/test.txt",
+ connection="<BLOB_CONNECTION_SETTING>")
+@app.blob_output(arg_name="outputblob",
+ path="newblob/test.txt",
+ connection="<BLOB_CONNECTION_SETTING>")
+def main(req: func.HttpRequest, inputblob: str, outputblob: func.Out[str]):
+ logging.info(f'Python HTTP trigger function processed {len(inputblob)} bytes')
+ outputblob.set(inputblob)
+ return "ok"
+```
+
+# [v1](#tab/python-v1)
+
+The function makes a copy of a blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
In the *function.json* file, the `queueTrigger` metadata property is used to specify the blob name in the `path` properties:
def main(queuemsg: func.QueueMessage, inputblob: bytes) -> bytes:
return inputblob ``` ++ ::: zone-end ::: zone pivot="programming-language-csharp" ## Attributes
The following table explains the binding configuration properties for C# script
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ::: zone-end
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using decorators, the following properties on the `blob_input` and `blob_output` decorators define the blob input and output bindings:
+
+| Property | Description |
+|-|--|
+|`arg_name` | The name of the variable that represents the blob in function code. |
+|`path` | The path to the blob. For the `blob_input` decorator, it's the blob that's read. For the `blob_output` decorator, it's the output or copy of the input blob. |
+|`connection` | The storage account connection string. |
+|`data_type` | For dynamically typed languages, specifies the underlying data type. Possible values are `string`, `binary`, or `stream`. For more detail, refer to the [triggers and bindings concepts](functions-triggers-bindings.md?tabs=python#trigger-and-binding-definitions). |
+
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
The `@BlobInput` attribute gives you access to the blob that triggered the funct
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file.
azure-functions Functions Bindings Storage Blob Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-output.md
Title: Azure Blob storage output binding for Azure Functions description: Learn how to provide Azure Blob storage output binding data to an Azure Function. Previously updated : 03/04/2022 Last updated : 03/02/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
The output binding allows you to modify and delete blob storage data in an Azure
For information on setup and configuration details, see the [overview](./functions-bindings-storage-blob.md).
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example ::: zone pivot="programming-language-csharp"
Push-OutputBinding -Name myOutputBlob -Value $myInputBlob
<!--Same example for input and output. -->
-The following example shows blob input and output bindings in a *function.json* file and [Python code](functions-reference-python.md) that uses the bindings. The function makes a copy of a blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
+The following example shows blob input and output bindings. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+The code creates a copy of a blob.
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="BlobOutput1")
+@app.route(route="file")
+@app.blob_input(arg_name="inputblob",
+ path="sample-workitems/test.txt",
+ connection="<BLOB_CONNECTION_SETTING>")
+@app.blob_output(arg_name="outputblob",
+ path="newblob/test.txt",
+ connection="<BLOB_CONNECTION_SETTING>")
+def main(req: func.HttpRequest, inputblob: str, outputblob: func.Out[str]):
+ logging.info(f'Python HTTP trigger function processed {len(inputblob)} bytes')
+ outputblob.set(inputblob)
+ return "ok"
+```
+
+# [v1](#tab/python-v1)
+
+The function makes a copy of a blob. The function is triggered by a queue message that contains the name of the blob to copy. The new blob is named *{originalblobname}-Copy*.
In the *function.json* file, the `queueTrigger` metadata property is used to specify the blob name in the `path` properties:
The following table explains the binding configuration properties for C# script
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ::: zone-end
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using decorators, the following properties on the `blob_input` and `blob_output` decorators define the blob input and output bindings:
+
+| Property | Description |
+|-|--|
+|`arg_name` | The name of the variable that represents the blob in function code. |
+|`path` | The path to the blob. For the `blob_input` decorator, it's the blob that's read. For the `blob_output` decorator, it's the output or copy of the input blob. |
+|`connection` | The storage account connection string. |
+|`data_type` | For dynamically typed languages, specifies the underlying data type. Possible values are `string`, `binary`, or `stream`. For more detail, refer to the [triggers and bindings concepts](functions-triggers-bindings.md?tabs=python#trigger-and-binding-definitions). |
++
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
The `@BlobOutput` attribute gives you access to the blob that triggered the func
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file.
azure-functions Functions Bindings Storage Blob Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md
Title: Azure Blob storage trigger for Azure Functions description: Learn how to run an Azure Function as Azure Blob storage data changes. Previously updated : 03/04/2022 Last updated : 03/06/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
There are several ways to execute your function code based on changes to blobs i
For information on setup and configuration details, see the [overview](./functions-bindings-storage-blob.md).
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example ::: zone pivot="programming-language-csharp"
Write-Host "PowerShell Blob trigger: Name: $($TriggerMetadata.Name) Size: $($Inp
::: zone-end ::: zone pivot="programming-language-python"
-The following example shows a blob trigger binding in a *function.json* file and [Python code](functions-reference-python.md) that uses the binding. The function writes a log when a blob is added or updated in the `samples-workitems` [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources).
+The following example shows a blob trigger binding. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="BlobTrigger1")
+@app.blob_trigger(arg_name="myblob",
+ path="PATH/TO/BLOB",
+ connection="CONNECTION_SETTING")
+def test_function(myblob: func.InputStream):
+ logging.info(f"Python blob trigger function processed blob \n"
+ f"Name: {myblob.name}\n"
+ f"Blob Size: {myblob.length} bytes")
+```
+
+# [v1](#tab/python-v1)
+
+The function writes a log when a blob is added or updated in the `samples-workitems` [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources).
Here's the *function.json* file: ```json
def main(myblob: func.InputStream):
``` ::: zone-end +++ ::: zone pivot="programming-language-csharp" ## Attributes
C# script uses a *function.json* file for configuration instead of attributes.
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] ::: zone-end
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using decorators, the following properties on the `blob_trigger` decorator define the Blob Storage trigger:
+
+| Property | Description |
+|-|--|
+|`arg_name` | Declares the parameter name in the function signature. When the function is triggered, this parameter's value has the contents of the blob. |
+|`path` | The [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources) to monitor. May be a [blob name pattern](#blob-name-patterns). |
+|`connection` | The storage account connection string. |
+
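+For instance, here's an unofficial sketch of a trigger whose `path` uses a blob name pattern, building on the imports and `app` object from the earlier example; the container and connection names are placeholders:
+
+```python
+@app.function_name(name="BlobPatternTrigger")
+@app.blob_trigger(arg_name="myblob",
+                  path="samples-workitems/{name}",
+                  connection="CONNECTION_SETTING")
+def pattern_trigger(myblob: func.InputStream):
+    # {name} binds to the blob's name within the container.
+    logging.info("Blob name: %s", myblob.name)
+```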
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
::: zone pivot="programming-language-java" ## Annotations
The `@BlobTrigger` attribute is used to give you access to the blob that trigger
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file. |function.json property |Description|
azure-functions Functions Bindings Storage Queue Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-output.md
Title: Azure Queue storage output binding for Azure Functions description: Learn to create Azure Queue storage messages in Azure Functions. Previously updated : 03/04/2022 Last updated : 03/06/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
Azure Functions can create new Azure Queue storage messages by setting up an out
For information on setup and configuration details, see the [overview](./functions-bindings-storage-queue.md).
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example ::: zone pivot="programming-language-csharp"
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
::: zone-end ::: zone pivot="programming-language-python"
-The following example demonstrates how to output single and multiple values to storage queues. The configuration needed for *function.json* is the same either way.
+The following example demonstrates how to output single and multiple values to storage queues. The configuration needed for *function.json* is the same either way. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="QueueOutput1")
+@app.route(route="message")
+@app.queue_output(arg_name="msg",
+ queue_name="<QUEUE_NAME>",
+ connection="<CONNECTION_SETTING>")
+def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
+ input_msg = req.params.get('name')
+ logging.info(input_msg)
+
+ msg.set(input_msg)
+
+ logging.info(f'name: {input_msg}')
+ return 'OK'
+```
+# [v1](#tab/python-v1)
A Storage queue binding is defined in *function.json* where *type* is set to `queue`.
def main(req: func.HttpRequest, msg: func.Out[typing.List[str]]) -> func.HttpRes
return 'OK' ``` ++ ::: zone-end ::: zone pivot="programming-language-csharp" ## Attributes
The following table explains the binding configuration properties that you set i
|**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Queues. See [Connections](#connections).| ::: zone-end
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using a decorator, the following properties on the `queue_output` decorator define the Queue Storage output binding:
+
+| Property | Description |
+|-|--|
+| `arg_name` | The name of the variable that represents the queue in function code. |
+| `queue_name` | The name of the queue. |
+| `connection` | The name of an app setting or setting collection that specifies how to connect to Azure Queues. See [Connections](#connections). |
+
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
The parameter associated with the [QueueOutput](/java/api/com.microsoft.azure.fu
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
+ The following table explains the binding configuration properties that you set in the *function.json* file.
azure-functions Functions Bindings Storage Queue Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-trigger.md
Title: Azure Queue storage trigger for Azure Functions description: Learn to run an Azure Function as Azure Queue storage data changes. Previously updated : 03/04/2022 Last updated : 02/27/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
zone_pivot_groups: programming-languages-set-functions-lang-workers
The queue storage trigger runs a function as messages are added to Azure Queue storage.
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example ::: zone pivot="programming-language-csharp"
Here's the *function.json* file:
} ```
-The [configuration](#configuration) section explains these properties.
+The [section below](#attributes) explains these properties.
Here's the C# script code:
Write-Host "Dequeue count: $($TriggerMetadata.DequeueCount)"
::: zone-end ::: zone pivot="programming-language-python"
-The following example demonstrates how to read a queue message passed to a function via a trigger.
+The following example demonstrates how to read a queue message passed to a function via a trigger. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="QueueFunc")
+@app.queue_trigger(arg_name="msg", queue_name="inputqueue",
+ connection="storageAccountConnectionString") # Queue trigger
+@app.write_queue(arg_name="outputQueueItem", queue_name="outqueue",
+ connection="storageAccountConnectionString") # Queue output binding
+def test_function(msg: func.QueueMessage,
+ outputQueueItem: func.Out[str]) -> None:
+ logging.info('Python queue trigger function processed a queue item: %s',
+ msg.get_body().decode('utf-8'))
+ outputQueueItem.set('hello')
+```
+
+# [v1](#tab/python-v1)
A Storage queue trigger is defined in *function.json* where *type* is set to `queueTrigger`.
def main(msg: func.QueueMessage):
logging.info(result) ```+ ::: zone-end ::: zone pivot="programming-language-csharp"
public class QueueTriggerDemo {
|`connection` | Points to the storage account connection string. | ::: zone-end
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using decorators, the following properties on the `queue_trigger` decorator define the Queue Storage trigger:
+
+| Property | Description |
+|-|--|
+|`arg_name` | Declares the parameter name in the function signature. When the function is triggered, this parameter's value has the contents of the queue message. |
+|`queue_name` | Declares the queue name in the storage account. |
+|`connection` | Points to the storage account connection string. |
+
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
The following table explains the binding configuration properties that you set in the *function.json* file and the `QueueTrigger` attribute. |function.json property | Description|
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md
Title: Timer trigger for Azure Functions
description: Understand how to use timer triggers in Azure Functions. ms.assetid: d2f013d1-f458-42ae-baf8-1810138118ac Previously updated : 03/04/2022 Last updated : 03/06/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
For information on how to manually run a timer-triggered function, see [Manually
Source code for the timer extension package is in the [azure-webjobs-sdk-extensions](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Timers/) GitHub repository.
+Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
+
+# [v2](#tab/python-v2)
+The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-decorators#programming-model).
+
+# [v1](#tab/python-v1)
+The Python v1 programming model requires you to define bindings in a separate *function.json* file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model).
+++
+This article supports both programming models.
+
+> [!IMPORTANT]
+> The Python v2 programming model is currently in preview.
+ ## Example ::: zone pivot="programming-language-csharp"
public void keepAlive(
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-python,programming-language-powershell"
-The following example shows a timer trigger binding in a *function.json* file and function code that uses the binding, where an instance representing the timer is passed to the function. The function writes a log indicating whether this function invocation is due to a missed schedule occurrence.
+The following example shows a timer trigger binding and function code that uses the binding, where an instance representing the timer is passed to the function. The function writes a log indicating whether this function invocation is due to a missed schedule occurrence. The example depends on whether you use the [v1 or v2 Python programming model](functions-reference-python.md).
+
+# [v2](#tab/python-v2)
+
+```python
+import datetime
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="mytimer")
+@app.schedule(schedule="0 */5 * * * *",
+ arg_name="mytimer",
+ run_on_startup=True)
+def test_function(mytimer: func.TimerRequest) -> None:
+ utc_timestamp = datetime.datetime.utcnow().replace(
+ tzinfo=datetime.timezone.utc).isoformat()
+ if mytimer.past_due:
+ logging.info('The timer is past due!')
+ logging.info('Python timer trigger function ran at %s', utc_timestamp)
+```
+
+# [v1](#tab/python-v1)
Here's the binding data in the *function.json* file:
module.exports = async function (context, myTimer) {
}; ``` ++ ::: zone-end ::: zone pivot="programming-language-powershell"
The following table explains the binding configuration properties for C# script
::: zone-end +
+## Decorators
+
+_Applies only to the Python v2 programming model._
+
+For Python v2 functions defined using a decorator, the following properties on the `schedule` decorator define the timer trigger:
+
+| Property | Description |
+|-|--|
+| `arg_name` | The name of the variable that represents the timer object in function code. |
+| `schedule` | A [CRON expression](#ncrontab-expressions) or a [TimeSpan](#timespan) value. A `TimeSpan` can be used only for a function app that runs on an App Service Plan. You can put the schedule expression in an app setting and set this property to the app setting name wrapped in **%** signs, as in this example: "%ScheduleAppSetting%". |
+| `run_on_startup` | If `true`, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity, when the function app restarts due to function changes, and when the function app scales out. *Use with caution.* `run_on_startup` should rarely, if ever, be set to `true`, especially in production. |
+| `use_monitor` | Set to `true` or `false` to indicate whether the schedule should be monitored. Schedule monitoring persists schedule occurrences to aid in ensuring the schedule is maintained correctly even when function app instances restart. If not set explicitly, the default is `true` for schedules that have a recurrence interval greater than or equal to 1 minute. For schedules that trigger more than once per minute, the default is `false`. |
+
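+For example, here's a minimal sketch that reads the schedule from an app setting, as described in the table above. It assumes an app setting named `ScheduleAppSetting` that holds an NCRONTAB expression such as `0 */5 * * * *`; wrapping the setting name in `%` signs tells the runtime to resolve it at startup.
+
+```python
+import logging
+import azure.functions as func
+
+app = func.FunctionApp()
+
+@app.function_name(name="configurabletimer")
+@app.schedule(schedule="%ScheduleAppSetting%",  # resolved from app settings
+              arg_name="mytimer",
+              run_on_startup=False,             # avoid extra invocations in production
+              use_monitor=True)                 # persist schedule occurrences across restarts
+def configurable_timer(mytimer: func.TimerRequest) -> None:
+    if mytimer.past_due:
+        logging.info('The timer is past due!')
+    logging.info('Python timer trigger function ran.')
+```
+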
+For Python functions defined by using *function.json*, see the [Configuration](#configuration) section.
+ ::: zone pivot="programming-language-java" ## Annotations
The `@TimerTrigger` annotation on the function defines the `schedule` using the
::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python" ## Configuration+
+_Applies only to the Python v1 programming model._
++ The following table explains the binding configuration properties that you set in the *function.json* file.
azure-functions Functions Bindings Triggers Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-triggers-python.md
- Title: Python V2 model Azure Functions triggers and bindings
-description: Provides examples of how to define Python triggers and bindings in Azure Functions using the preview v2 model
- Previously updated : 10/25/2022---
-# Python V2 model Azure Functions triggers and bindings (preview)
-
-The new Python v2 programming model in Azure Functions is intended to provide better alignment with Python development principles and with commonly used Python frameworks.
-
-The improved v2 programming model requires fewer files than the default model (v1), and specifically eliminates the need for a configuration file (`function.json`). Instead, triggers and bindings are represented in the `function_app.py` file as decorators. Moreover, functions can be logically organized with support for multiple functions to be stored in the same file. Functions within the same function application can also be stored in different files, and be referenced as blueprints.
-
-To learn more about using the new Python programming model for Azure Functions, see the [Azure Functions Python developer guide](./functions-reference-python.md). In addition to the documentation, [hints](https://aka.ms/functions-python-hints) are available in code editors that support type checking with .pyi files.
-
-This article contains example code snippets that define various triggers and bindings using the Python v2 programming model. To be able to run the code snippets below, ensure the following:
-
-- The function application is defined and named `app`.
-- Confirm that the parameters within the trigger reflect values that correspond with your storage account.
-- The name of the file the function is in must be `function_app.py`.
-
-To create your first function in the new v2 model, see one of these quickstart articles:
-
-+ [Get started with Visual Studio](./create-first-function-vs-code-python.md)
-+ [Get started command prompt](./create-first-function-cli-python.md)
-
-## Azure Blob storage trigger
-
-The following code snippet defines a function triggered from Azure Blob Storage:
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="BlobTrigger1")
-@app.blob_trigger(arg_name="myblob", path="samples-workitems/{name}",
- connection="AzureWebJobsStorage")
-def test_function(myblob: func.InputStream):
- logging.info(f"Python blob trigger function processed blob \n"
- f"Name: {myblob.name}\n"
- f"Blob Size: {myblob.length} bytes")
-```
-
-## Azure Blob storage input binding
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="BlobInput1")
-@app.route(route="file")
-@app.blob_input(arg_name="inputblob",
- path="sample-workitems/{name}",
- connection="AzureWebJobsStorage")
-def test(req: func.HttpRequest, inputblob: bytes) -> func.HttpResponse:
- logging.info(f'Python Queue trigger function processed {len(inputblob)} bytes')
- return inputblob
-```
-
-## Azure Blob storage output binding
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="BlobOutput1")
-@app.route(route="file")
-@app.blob_input(arg_name="inputblob",
- path="sample-workitems/test.txt",
- connection="AzureWebJobsStorage")
-@app.blob_output(arg_name="outputblob",
- path="newblob/test.txt",
- connection="AzureWebJobsStorage")
-def main(req: func.HttpRequest, inputblob: str, outputblob: func.Out[str]):
- logging.info(f'Python Queue trigger function processed {len(inputblob)} bytes')
- outputblob.set(inputblob)
- return "ok"
-```
-
-## Azure Cosmos DB trigger
-
-The following code snippet defines a function triggered from an Azure Cosmos DB (SQL API):
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="CosmosDBTrigger1")
-@app.cosmos_db_trigger(arg_name="documents", database_name="<DB_NAME>", collection_name="<COLLECTION_NAME>", connection_string_setting="AzureWebJobsStorage",
- lease_collection_name="leases", create_lease_collection_if_not_exists="true")
-def test_function(documents: func.DocumentList) -> str:
- if documents:
- logging.info('Document id: %s', documents[0]['id'])
-```
-
-## Azure Cosmos DB input binding
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.route()
-@app.cosmos_db_input(
- arg_name="documents", database_name="<DB_NAME>",
- collection_name="<COLLECTION_NAME>",
- connection_string_setting="CONNECTION_SETTING")
-def cosmosdb_input(req: func.HttpRequest, documents: func.DocumentList) -> str:
- return func.HttpResponse(documents[0].to_json())
-```
-
-## Azure Cosmos DB output binding
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.route()
-@app.cosmos_db_output(
- arg_name="documents", database_name="<DB_NAME>",
- collection_name="<COLLECTION_NAME>",
- create_if_not_exists=True,
- connection_string_setting="CONNECTION_SETTING")
-def main(req: func.HttpRequest, documents: func.Out[func.Document]) -> func.HttpResponse:
- request_body = req.get_body()
- documents.set(func.Document.from_json(request_body))
- return 'OK'
-```
-
-## Azure EventHub trigger
-
-The following code snippet defines a function triggered from an event hub instance:
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="EventHubTrigger1")
-@app.event_hub_message_trigger(arg_name="myhub", event_hub_name="samples-workitems",
- connection=""CONNECTION_SETTING"")
-def test_function(myhub: func.EventHubEvent):
- logging.info('Python EventHub trigger processed an event: %s',
- myhub.get_body().decode('utf-8'))
-```
-
-## Azure EventHub output binding
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="eventhub_output")
-@app.route(route="eventhub_output")
-@app.event_hub_output(arg_name="event",
- event_hub_name="samples-workitems",
- connection="CONNECTION_SETTING")
-def eventhub_output(req: func.HttpRequest, event: func.Out[str]):
- body = req.get_body()
- if body is not None:
- event.set(body.decode('utf-8'))
- else:
- logging.info('req body is none')
- return 'ok'
-```
-
-## HTTP trigger
-
-The following code snippet defines an HTTP triggered function:
-
-```python
-import azure.functions as func
-import logging
-app = func.FunctionApp(auth_level=func.AuthLevel.ANONYMOUS)
-@app.function_name(name="HttpTrigger1")
-@app.route(route="hello")
-def test_function(req: func.HttpRequest) -> func.HttpResponse:
- logging.info('Python HTTP trigger function processed a request.')
- name = req.params.get('name')
- if not name:
- try:
- req_body = req.get_json()
- except ValueError:
- pass
- else:
- name = req_body.get('name')
- if name:
- return func.HttpResponse(f"Hello, {name}. This HTTP triggered function executed successfully.")
- else:
- return func.HttpResponse(
- "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
- status_code=200
- )
-```
-
-## Azure Queue storage trigger
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="QueueTrigger1")
-@app.queue_trigger(arg_name="msg", queue_name="python-queue-items",
- connection=""AzureWebJobsStorage"")
-def test_function(msg: func.QueueMessage):
- logging.info('Python queue trigger processed a message: %s',
- msg.get_body().decode('utf-8'))
-```
-
-## Azure Queue storage output binding
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="QueueOutput1")
-@app.route(route="message")
-@app.queue_output(arg_name="msg", queue_name="python-queue-items", connection="AzureWebJobsStorage")
-def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
- input_msg = req.params.get('name')
- msg.set(input_msg)
- logging.info(input_msg)
- logging.info('name: {name}')
- return 'OK'
-```
-
-## Azure Service Bus queue trigger
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="ServiceBusQueueTrigger1")
-@app.service_bus_queue_trigger(arg_name="msg", queue_name="myinputqueue", connection="CONNECTION_SETTING")
-def test_function(msg: func.ServiceBusMessage):
- logging.info('Python ServiceBus queue trigger processed message: %s',
- msg.get_body().decode('utf-8'))
-```
-
-## Azure Service Bus topic trigger
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="ServiceBusTopicTrigger1")
-@app.service_bus_topic_trigger(arg_name="message", topic_name="mytopic", connection="CONNECTION_SETTING", subscription_name="testsub")
-def test_function(message: func.ServiceBusMessage):
- message_body = message.get_body().decode("utf-8")
- logging.info("Python ServiceBus topic trigger processed message.")
- logging.info("Message Body: " + message_body)
-```
-
-## Azure Service Bus Topic output binding
-
-```python
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.route(route="put_message")
-@app.service_bus_topic_output(
- arg_name="message",
- connection="CONNECTION_SETTING",
- topic_name="mytopic")
-def main(req: func.HttpRequest, message: func.Out[str]) -> func.HttpResponse:
- input_msg = req.params.get('message')
- message.set(input_msg)
- return 'OK'
-```
-
-## Timer trigger
-
-```python
-import datetime
-import logging
-import azure.functions as func
-app = func.FunctionApp()
-@app.function_name(name="mytimer")
-@app.schedule(schedule="0 */5 * * * *", arg_name="mytimer", run_on_startup=True,
- use_monitor=False)
-def test_function(mytimer: func.TimerRequest) -> None:
- utc_timestamp = datetime.datetime.utcnow().replace(
- tzinfo=datetime.timezone.utc).isoformat()
- if mytimer.past_due:
- logging.info('The timer is past due!')
- logging.info('Python timer trigger function ran at %s', utc_timestamp)
-```
-
-## Durable Functions
-
-Durable Functions also provides preview support of the V2 programming model. To try it out, install the Durable Functions SDK (PyPI package `azure-functions-durable`) from version `1.2.2` or greater. You can reach us in the [Durable Functions SDK for Python repo](https://github.com/Azure/azure-functions-durable-python) with feedback and suggestions.
--
-> [!NOTE]
-> Using [Extension Bundles](./functions-bindings-register.md#extension-bundles) is not currently supported when trying out the Python V2 programming model with Durable Functions, so you will need to manage your extensions manually.
-> To do this, remove the `extensionBundle` section of your `host.json` as described [here](./functions-run-local.md#install-extensions) and run `func extensions install --package Microsoft.Azure.WebJobs.Extensions.DurableTask --version 2.9.1` on your terminal. This will install the Durable Functions extension for your app and will allow you to try out the new experience.
-
-The Durable Functions Triggers and Bindings may be accessed from an instance `DFApp`, a subclass of `FunctionApp` that additionally exports Durable Functions-specific decorators.
-
-Below is a simple Durable Functions app that declares a simple sequential orchestrator, all in one file!
-
-```python
-import azure.functions as func
-import azure.durable_functions as df
-
-myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
-
-# An HTTP-Triggered Function with a Durable Functions Client binding
-@myApp.route(route="orchestrators/{functionName}")
-@myApp.durable_client_input(client_name="client")
-async def durable_trigger(req: func.HttpRequest, client):
- function_name = req.route_params.get('functionName')
- instance_id = await client.start_new(function_name)
- response = client.create_check_status_response(req, instance_id)
- return response
-
-# Orchestrator
-@myApp.orchestration_trigger(context_name="context")
-def my_orchestrator(context):
- result1 = yield context.call_activity("hello", "Seattle")
- result2 = yield context.call_activity("hello", "Tokyo")
- result3 = yield context.call_activity("hello", "London")
-
- return [result1, result2, result3]
-
-# Activity
-@myApp.activity_trigger(input_name="myInput")
-def hello(myInput: str):
- return "Hello " + myInput
-```
-
-> [!NOTE]
-> Previously, Durable Functions orchestrators needed an extra line of boilerplate, usually at the end of the file, to be indexed:
-> `main = df.Orchestrator.create(<name_of_orchestrator_function>)`.
-> This is no longer needed in V2 of the Python programming model. This applies to Entities as well, which required a similar boilerplate through
-> `main = df.Entity.create(<name_of_entity_function>)`.
-
-For reference, all Durable Functions Triggers and Bindings are listed below:
-
-### Orchestration Trigger
-
-```python
-import azure.functions as func
-import azure.durable_functions as df
-
-myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
-
-@myApp.orchestration_trigger(context_name="context")
-def my_orchestrator(context):
- result = yield context.call_activity("Hello", "Tokyo")
- return result
-```
-
-### Activity Trigger
-
-```python
-import azure.functions as func
-import azure.durable_functions as df
-
-myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
-
-@myApp.activity_trigger(input_name="myInput")
-def my_activity(myInput: str):
- return "Hello " + myInput
-```
-
-### DF Client Binding
-
-```python
-import azure.functions as func
-import azure.durable_functions as df
-
-myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
-
-@myApp.route(route="orchestrators/{functionName}")
-@myApp.durable_client_input(client_name="client")
-async def durable_trigger(req: func.HttpRequest, client):
- function_name = req.route_params.get('functionName')
- instance_id = await client.start_new(function_name)
- response = client.create_check_status_response(req, instance_id)
- return response
-```
-
-### Entity Trigger
-
-```python
-import azure.functions as func
-import azure.durable_functions as df
-
-myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
-
-@myApp.entity_trigger(context_name="context")
-def entity_function(context):
- current_value = context.get_state(lambda: 0)
- operation = context.operation_name
- if operation == "add":
- amount = context.get_input()
- current_value += amount
- elif operation == "reset":
- current_value = 0
- elif operation == "get":
- pass
-
- context.set_state(current_value)
- context.set_result(current_value)
-```
-
-## Next steps
-
-+ [Python developer guide](./functions-reference-python.md)
-+ [Get started with Visual Studio](./create-first-function-vs-code-python.md)
-+ [Get started command prompt](./create-first-function-cli-python.md)
azure-functions Functions Create Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-vnet.md
Title: Use private endpoints to integrate Azure Functions with a virtual network description: This tutorial shows you how to connect a function to an Azure virtual network and lock it down by using private endpoints. Previously updated : 2/10/2023 Last updated : 3/24/2023 #Customer intent: As an enterprise developer, I want to create a function that can connect to a virtual network with private endpoints to secure my function app. # Tutorial: Integrate Azure Functions with an Azure virtual network by using private endpoints
-This tutorial shows you how to use Azure Functions to connect to resources in an Azure virtual network by using private endpoints. You'll create a function by using a storage account that's locked behind a virtual network. The virtual network uses a Service Bus queue trigger.
+This tutorial shows you how to use Azure Functions to connect to resources in an Azure virtual network by using private endpoints. You create a new function app using a new storage account that's locked behind a virtual network via the Azure portal. The virtual network uses a Service Bus queue trigger.
In this tutorial, you'll: > [!div class="checklist"]
-> * Create a function app in the Premium plan.
-> * Create Azure resources, such as the Service Bus, storage account, and virtual network.
-> * Lock down your storage account behind a private endpoint.
+> * Create a function app in the Elastic Premium plan with virtual network integration and private endpoints.
+> * Create Azure resources, such as the Service Bus.
> * Lock down your Service Bus behind a private endpoint. > * Deploy a function app that uses both the Service Bus and HTTP triggers.
-> * Lock down your function app behind a private endpoint.
> * Test to see that your function app is secure inside the virtual network. > * Clean up resources. ## Create a function app in a Premium plan
-You'll create a .NET function app in the Premium plan because this tutorial uses C#. Other languages are also supported in Windows. The Premium plan provides serverless scale while supporting virtual network integration.
+You create a C# function app in an [Elastic Premium plan](./functions-premium-plan.md), which supports networking capabilities such as virtual network integration on create along with serverless scale. This tutorial uses C# and Windows. Other languages and Linux are also supported.
1. On the Azure portal menu or the **Home** page, select **Create a resource**.
You'll create a .NET function app in the Premium plan because this tutorial uses
| Setting | Suggested value | Description | | | - | -- | | **Subscription** | Your subscription | Subscription under which this new function app is created. |
- | **[Resource Group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Name for the new resource group where you'll create your function app. |
+ | **[Resource Group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Name for the new resource group where you create your function app. |
| **Function App name** | Globally unique name | Name that identifies your new function app. Valid characters are `a-z` (case insensitive), `0-9`, and `-`. | |**Publish**| Code | Choose to publish code files or a Docker container. | | **Runtime stack** | .NET | This tutorial uses .NET. | | **Version** | 6 | This tutorial uses .NET 6.0 running [in the same process as the Functions host](./functions-dotnet-class-library.md). | |**Region**| Preferred region | Choose a [region](https://azure.microsoft.com/regions/) near you or near other services that your functions access. |
+ |**Operating system**| Windows | This tutorial uses Windows but also works for Linux. |
+ | **[Plan](./functions-scale.md)** | Premium | Hosting plan that defines how resources are allocated to your function app. By default, when you select **Premium**, a new App Service plan is created. The default **Sku and size** is **EP1**, where *EP* stands for _elastic premium_. For more information, see the list of [Premium SKUs](./functions-premium-plan.md#available-instance-skus).<br/><br/>When you run JavaScript functions on a Premium plan, choose an instance that has fewer vCPUs. For more information, see [Choose single-core Premium plans](./functions-reference-node.md#considerations-for-javascript-functions). |
1. Select **Next: Hosting**. On the **Hosting** page, enter the following settings. | Setting | Suggested value | Description | | | - | -- |
- | **[Storage account](../storage/common/storage-account-create.md)** | Globally unique name | Create a storage account used by your function app. Storage account names must be between 3 and 24 characters long. They may contain numbers and lowercase letters only. You can also use an existing account, which must meet the [storage account requirements](./storage-considerations.md#storage-account-requirements). |
- |**Operating system**| Windows | This tutorial uses Windows. |
- | **[Plan](./functions-scale.md)** | Premium | Hosting plan that defines how resources are allocated to your function app. By default, when you select **Premium**, a new App Service plan is created. The default **Sku and size** is **EP1**, where *EP* stands for _elastic premium_. For more information, see the list of [Premium SKUs](./functions-premium-plan.md#available-instance-skus).<br/><br/>When you run JavaScript functions on a Premium plan, choose an instance that has fewer vCPUs. For more information, see [Choose single-core Premium plans](./functions-reference-node.md#considerations-for-javascript-functions). |
+ | **[Storage account](../storage/common/storage-account-create.md)** | Globally unique name | Create a storage account used by your function app. Storage account names must be between 3 and 24 characters long. They may contain numbers and lowercase letters only. You can also use an existing account that isn't restricted by firewall rules and meets the [storage account requirements](./storage-considerations.md#storage-account-requirements). When using Functions with a locked-down storage account, a v2 storage account is needed; this is the default storage version when you create a function app with networking capabilities through the create blade. |
+
+1. Select **Next: Networking**. On the **Networking** page, enter the following settings.
+
+ > [!NOTE]
+ > Some of these settings aren't visible until other options are selected.
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **Enable network injection** | On | The ability to configure your application with VNet integration at creation appears in the portal window after this option is switched to **On**. |
+ | **Virtual Network** | Create New | Select the **Create New** field. In the pop-out screen, provide a name for your virtual network and select **Ok**. Options to restrict inbound and outbound access to your function app on create are displayed. You must explicitly enable VNet integration in the **Outbound access** portion of the window to restrict outbound access. |
+
+ Enter the following settings for the **Inbound access** section. This step creates a private endpoint on your function app.
+
+ > [!TIP]
+ > To continue interacting with your function app from the portal, you need to add your local computer to the virtual network. If you don't wish to restrict inbound access, skip this step.
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **Enable private endpoints** | On | The ability to configure inbound private endpoints for your application at creation appears in the portal after this option is enabled. |
+ | **Private endpoint name** | myInboundPrivateEndpointName | Name that identifies your new function app private endpoint. |
+ | **Inbound subnet** | Create New | This option creates a new subnet for your inbound private endpoint. Multiple private endpoints may be added to a singular subnet. Provide a **Subnet Name**. The **Subnet Address Block** may be left at the default value. Select **Ok**. To learn more about subnet sizing, see [Subnets](functions-networking-options.md#subnets). |
+ | **DNS** | Azure Private DNS Zone | This value indicates which DNS server your private endpoint uses. In most cases, if you're working within Azure, Azure Private DNS Zone is the DNS zone you should use, because using **Manual** for custom DNS zones adds complexity. |
+
+ Enter the following settings for the **Outbound access** section. This step integrates your function app with a virtual network on creation. It also exposes options to create private endpoints on your storage account and restrict your storage account from network access on create. When the function app is integrated with a virtual network, all outbound traffic by default goes [through the virtual network](../app-service/overview-vnet-integration.md#how-regional-virtual-network-integration-works).
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **Enable VNet Integration** | On | This option integrates your function app with a virtual network on create and directs all outbound traffic through the virtual network. |
+ | **Outbound subnet** | Create new | This option creates a new subnet for your function app's virtual network integration. A function app can only be integrated with an empty subnet. Provide a **Subnet Name**. The **Subnet Address Block** may be left at the default value. To learn more about subnet sizing, see [Subnets](functions-networking-options.md#subnets). Select **Ok**. The option to create **Storage private endpoints** is displayed. To use your function app with virtual networks, you need to join it to a subnet. |
+
+ Enter the following settings for the **Storage private endpoint** section. This step creates private endpoints for the blob, queue, file, and table endpoints on your storage account on create. This effectively integrates your storage account with the VNet.
+
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **Add storage private endpoint** | On | The ability to configure storage private endpoints for your application at creation is displayed in the portal after this option is enabled. |
+ | **Private endpoint name** | myStoragePrivateEndpointName | Name that identifies your storage account private endpoint. |
+ | **Private endpoint subnet** | Create New | This option creates a new subnet for your inbound private endpoint on the storage account. Multiple private endpoints may be added to a singular subnet. Provide a **Subnet Name**. The **Subnet Address Block** may be left at the default value. To learn more about subnet sizing, see [Subnets](functions-networking-options.md#subnets). Select **Ok**. |
+ | **DNS** | Azure Private DNS Zone | This value indicates which DNS server your private endpoint uses. In most cases, if you're working within Azure, Azure Private DNS Zone is the DNS zone you should use, because using **Manual** for custom DNS zones adds complexity. |
1. Select **Next: Monitoring**. On the **Monitoring** page, enter the following settings.
You'll create a .NET function app in the Premium plan because this tutorial uses
1. Select **Review + create** to review the app configuration selections.
-1. On the **Review + create** page, review your settings. Then select **Create** to provision and deploy the function app.
+1. On the **Review + create** page, review your settings. Then select **Create** to create and deploy the function app.
1. In the upper-right corner of the portal, select the **Notifications** icon and watch for the **Deployment succeeded** message.
You'll create a .NET function app in the Premium plan because this tutorial uses
Congratulations! You've successfully created your premium function app.
-## Create Azure resources
-
-Next, you'll create a storage account, a Service Bus, and a virtual network.
-### Create a storage account
-
-Your virtual networks will need a storage account that's separate from the one you created with your function app.
-
-1. On the Azure portal menu or the **Home** page, select **Create a resource**.
-
-1. On the **New** page, search for *storage account*. Then select **Create**.
-
-1. On the **Basics** tab, use the following table to configure the storage account settings. All other settings can use the default values.
-
- | Setting | Suggested value | Description |
- | | - | - |
- | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | The resource group you created with your function app. |
- | **Name** | mysecurestorage| The name of the storage account that the private endpoint will be applied to. |
- | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | The region where you created your function app. |
-
-1. Select **Review + create**. After validation finishes, select **Create**.
+> [!NOTE]
+> Some deployments may occasionally fail to create the private endpoints in the storage account with the error 'StorageAccountOperationInProgress'. This failure occurs even though the function app itself gets created successfully. When you encounter this error, delete the function app and retry the operation. Alternatively, you can create the private endpoints on the storage account manually.
### Create a Service Bus
+Next, you create a Service Bus instance that is used to test the functionality of your function app's network capabilities in this tutorial.
+ 1. On the Azure portal menu or the **Home** page, select **Create a resource**. 1. On the **New** page, search for *Service Bus*. Then select **Create**.
Your virtual networks will need a storage account that's separate from the one y
| Setting | Suggested value | Description | | | - | - |
- | **Subscription** | Your subscription | The subscription under which your resources are created. |
+ | **Subscription** | Your subscription | The subscription in which your resources are created. |
| **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | The resource group you created with your function app. |
- | **Namespace name** | myServiceBus| The name of the Service Bus that the private endpoint will be applied to. |
+ | **Namespace name** | myServiceBus| The name of the Service Bus instance for which the private endpoint is enabled. |
| **[Location](https://azure.microsoft.com/regions/)** | myFunctionRegion | The region where you created your function app. | | **Pricing tier** | Premium | Choose this tier to use private endpoints with Azure Service Bus. | 1. Select **Review + create**. After validation finishes, select **Create**.
-### Create a virtual network
-
-The Azure resources in this tutorial either integrate with or are placed within a virtual network. You'll use private endpoints to contain network traffic within the virtual network.
-
-The tutorial creates two subnets:
-- **default**: Subnet for private endpoints. Private IP addresses are given from this subnet.
-- **functions**: Subnet for Azure Functions virtual network integration. This subnet is delegated to the function app.
-
-Create the virtual network to which the function app integrates:
-
-1. On the Azure portal menu or the **Home** page, select **Create a resource**.
-
-1. On the **New** page, search for *virtual network*. Then select **Create**.
-
-1. On the **Basics** tab, use the following table to configure the virtual network settings.
-
- | Setting | Suggested value | Description |
- | | - | - |
- | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | The resource group you created with your function app. |
- | **Name** | myVirtualNet| The name of the virtual network to which your function app will connect. |
- | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | The region where you created your function app. |
-
-1. On the **IP Addresses** tab, select **Add subnet**. Use the following table to configure the subnet settings.
-
- :::image type="content" source="./media/functions-create-vnet/1-create-vnet-ip-address.png" alt-text="Screenshot of the Create virtual network configuration view.":::
-
- | Setting | Suggested value | Description |
- | | - | - |
- | **Subnet name** | functions | The name of the subnet to which your function app will connect. |
- | **Subnet address range** | 10.0.1.0/24 | The subnet address range. In the preceding image, notice that the IPv4 address space is 10.0.0.0/16. If the value were 10.1.0.0/16, the recommended subnet address range would be 10.1.1.0/24. |
-
-1. Select **Review + create**. After validation finishes, select **Create**.
-
-## Lock down your storage account
-
-Azure private endpoints are used to connect to specific Azure resources by using a private IP address. This connection ensures that network traffic remains within the chosen virtual network and access is available only for specific resources.
-
-Create the private endpoints for Azure Files Storage, Azure Blob Storage and Azure Table Storage by using your storage account:
-
-1. In your new storage account, in the menu on the left, select **Networking**.
-
-1. On the **Private endpoint connections** tab, select **Private endpoint**.
-
- :::image type="content" source="./media/functions-create-vnet/2-navigate-private-endpoint-store.png" alt-text="Screenshot of how to create private endpoints for the storage account.":::
-
-1. On the **Basics** tab, use the private endpoint settings shown in the following table.
-
- | Setting | Suggested value | Description |
- | | - | - |
- | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | Choose the resource group you created with your function app. |
- | **Name** | file-endpoint | The name of the private endpoint for files from your storage account. |
- | **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | Choose the region where you created your storage account. |
-
-1. On the **Resource** tab, use the private endpoint settings shown in the following table.
-
- | Setting | Suggested value | Description |
- | | - | - |
- | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **Resource type** | Microsoft.Storage/storageAccounts | The resource type for storage accounts. |
- | **Resource** | mysecurestorage | The storage account you created. |
- | **Target sub-resource** | file | The private endpoint that will be used for files from the storage account. |
-
-1. On the **Configuration** tab, for the **Subnet** setting, choose **default**.
-
-1. Select **Review + create**. After validation finishes, select **Create**. Resources in the virtual network can now communicate with storage files.
-
-1. Create another private endpoint for blobs. On the **Resources** tab, use the settings shown in the following table. For all other settings, use the same values you used to create the private endpoint for files.
-
- | Setting | Suggested value | Description |
- | | - | - |
- | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **Resource type** | Microsoft.Storage/storageAccounts | The resource type for storage accounts. |
- | **Name** | blob-endpoint | The name of the private endpoint for blobs from your storage account. |
- | **Resource** | mysecurestorage | The storage account you created. |
- | **Target sub-resource** | blob | The private endpoint that will be used for blobs from the storage account. |
-1. Create another private endpoint for tables. On the **Resources** tab, use the settings shown in the following table. For all other settings, use the same values you used to create the private endpoint for files.
-
- | Setting | Suggested value | Description |
- | | - | - |
- | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **Resource type** | Microsoft.Storage/storageAccounts | The resource type for storage accounts. |
- | **Name** | table-endpoint | The name of the private endpoint for tables from your storage account. |
- | **Resource** | mysecurestorage | The storage account you created. |
- | **Target sub-resource** | table | The private endpoint that will be used for tables from the storage account. |
-1. After the private endpoints are created, return to the **Firewalls and virtual networks** section of your storage account.
-1. Ensure **Selected networks** is selected. It's not necessary to add an existing virtual network.
-
-Resources in the virtual network can now communicate with the storage account using the private endpoint.
## Lock down your Service Bus Create the private endpoint to lock down your Service Bus:
Create the private endpoint to lock down your Service Bus:
| Setting | Suggested value | Description | | | - | - |
- | **Subscription** | Your subscription | The subscription under which your resources are created. |
+ | **Subscription** | Your subscription | The subscription in which your resources are created. |
| **[Resource group](../azure-resource-manager/management/overview.md)** | myResourceGroup | The resource group you created with your function app. |
- | **Name** | sb-endpoint | The name of the private endpoint for files from your storage account. |
+ | **Name** | sb-endpoint | The name of the private endpoint for the service bus. |
| **[Region](https://azure.microsoft.com/regions/)** | myFunctionRegion | The region where you created your storage account. | 1. On the **Resource** tab, use the private endpoint settings shown in the following table.
Create the private endpoint to lock down your Service Bus:
| **Subscription** | Your subscription | The subscription under which your resources are created. | | **Resource type** | Microsoft.ServiceBus/namespaces | The resource type for the Service Bus. | | **Resource** | myServiceBus | The Service Bus you created earlier in the tutorial. |
- | **Target subresource** | namespace | The private endpoint that will be used for the namespace from the Service Bus. |
+ | **Target subresource** | namespace | The private endpoint that is used for the namespace from the Service Bus. |
-1. On the **Configuration** tab, for the **Subnet** setting, choose **default**.
+1. On the **Virtual Network** tab, for the **Subnet** setting, choose **default**.
1. Select **Review + create**. After validation finishes, select **Create**.
-1. After the private endpoint is created, return to the **Firewalls and virtual networks** section of your Service Bus namespace.
+1. After the private endpoint is created, return to the **Networking** section of your Service Bus namespace and check the **Public Access** tab.
1. Ensure **Selected networks** is selected. 1. Select **+ Add existing virtual network** to add the recently created virtual network. 1. On the **Add networks** tab, use the network settings from the following table:
Create the private endpoint to lock down your Service Bus:
| Setting | Suggested value | Description| ||--|| | **Subscription** | Your subscription | The subscription under which your resources are created. |
- | **Virtual networks** | myVirtualNet | The name of the virtual network to which your function app will connect. |
- | **Subnets** | functions | The name of the subnet to which your function app will connect. |
+ | **Virtual networks** | myVirtualNet | The name of the virtual network to which your function app connects. |
+ | **Subnets** | functions | The name of the subnet to which your function app connects. |
1. Select **Add your client IP address** to give your current client IP access to the namespace. > [!NOTE]
Create the private endpoint to lock down your Service Bus:
Resources in the virtual network can now communicate with the Service Bus using the private endpoint.
-## Create a file share
-
-1. In the storage account you created, in the menu on the left, select **File shares**.
-
-1. Select **+ File shares**. For the purposes of this tutorial, name the file share *files*.
-
- :::image type="content" source="./media/functions-create-vnet/4-create-file-share.png" alt-text="Screenshot of how to create a file share in the storage account.":::
-
-1. Select **Create**.
-
-## Get the storage account connection string
-
-1. In the storage account you created, in the menu on the left, select **Access keys**.
-
-1. Select **Show keys**. Copy and save the connection string of **key1**. You'll need this connection string when you configure the app settings.
-
- :::image type="content" source="./media/functions-create-vnet/5-get-store-connection-string.png" alt-text="Screenshot of how to get a storage account connection string.":::
- ## Create a queue
-Create the queue where your Azure Functions Service Bus trigger will get events:
+Create the queue where your Azure Functions Service Bus trigger gets events:
1. In your Service Bus, in the menu on the left, select **Queues**.
Create the queue where your Azure Functions Service Bus trigger will get events:
1. In your Service Bus, in the menu on the left, select **Shared access policies**.
-1. Select **RootManageSharedAccessKey**. Copy and save the **Primary Connection String**. You'll need this connection string when you configure the app settings.
+1. Select **RootManageSharedAccessKey**. Copy and save the **Primary Connection String**. You need this connection string when you configure the app settings.
:::image type="content" source="./media/functions-create-vnet/7-get-service-bus-connection-string.png" alt-text="Screenshot of how to get a Service Bus connection string.":::
-## Integrate the function app
-
-To use your function app with virtual networks, you need to join it to a subnet. You'll use a specific subnet for the Azure Functions virtual network integration. You'll use the default subnet for other private endpoints you create in this tutorial.
-
-1. In your function app, in the menu on the left, select **Networking**.
-
-1. Under **VNet Integration**, select **Click here to configure**.
-
- :::image type="content" source="./media/functions-create-vnet/8-connect-app-vnet.png" alt-text="Screenshot of how to go to virtual network integration.":::
-
-1. Select **Add VNet**.
-
-1. Under **Virtual Network**, select the virtual network you created earlier.
-
-1. Select the **functions** subnet you created earlier. Select **OK**. Your function app is now integrated with your virtual network!
-
- If the virtual network and function app are in different subscriptions, you need to first provide **Contributor** access to the service principal **Microsoft Azure App Service** on the virtual network.
-
- :::image type="content" source="./media/functions-create-vnet/9-connect-app-subnet.png" alt-text="Screenshot of how to connect a function app to a subnet.":::
-
-1. Ensure that the **Route All** configuration setting is set to **Enabled**.
-
- :::image type="content" source="./media/functions-create-vnet/10-enable-route-all.png" alt-text="Screenshot of how to enable route all functionality.":::
- ## Configure your function app settings 1. In your function app, in the menu on the left, select **Configuration**.
-1. To use your function app with virtual networks, update the app settings shown in the following table. To add or edit a setting, select **+ New application setting** or the **Edit** icon in the rightmost column of the app settings table. When you finish, select **Save**.
+1. To use your function app with virtual networks and service bus, update the app settings shown in the following table. To add or edit a setting, select **+ New application setting** or the **Edit** icon in the rightmost column of the app settings table. When you finish, select **Save**.
| Setting | Suggested value | Description | | | - | - |
- | **AzureWebJobsStorage** | mysecurestorageConnectionString | The connection string of the storage account you created. This storage connection string is from the [Get the storage account connection string](#get-the-storage-account-connection-string) section. This setting allows your function app to use the secure storage account for normal operations at runtime. |
- | **WEBSITE_CONTENTAZUREFILECONNECTIONSTRING** | mysecurestorageConnectionString | The connection string of the storage account you created. This setting allows your function app to use the secure storage account for Azure Files, which is used during deployment. |
- | **WEBSITE_CONTENTSHARE** | files | The name of the file share you created in the storage account. Use this setting with WEBSITE_CONTENTAZUREFILECONNECTIONSTRING. |
| **SERVICEBUS_CONNECTION** | myServiceBusConnectionString | Create this app setting for the connection string of your Service Bus. This storage connection string is from the [Get a Service Bus connection string](#get-a-service-bus-connection-string) section.| | **WEBSITE_CONTENTOVERVNET** | 1 | Create this app setting. A value of 1 enables your function app to scale when your storage account is restricted to a virtual network. |
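For illustration, here's how a trigger references the `SERVICEBUS_CONNECTION` setting by name. This sketch uses the Python v2 programming model rather than the C# sample this tutorial deploys, and the queue name is a placeholder; the decorator reads the connection string from the app setting you created above.

```python
import logging
import azure.functions as func

app = func.FunctionApp()

@app.function_name(name="ServiceBusQueueTrigger")
@app.service_bus_queue_trigger(arg_name="msg",
                               queue_name="queue",
                               connection="SERVICEBUS_CONNECTION")  # app setting name, not the string itself
def process_message(msg: func.ServiceBusMessage) -> None:
    logging.info('Service Bus queue trigger processed message: %s',
                 msg.get_body().decode('utf-8'))
```
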
-1. In the **Configuration** view, select the **Function runtime settings** tab.
-
-1. Set **Runtime Scale Monitoring** to **On**. Then select **Save**. Runtime-driven scaling allows you to connect non-HTTP trigger functions to services that run inside your virtual network.
+1. Since you're using an Elastic Premium hosting plan, in the **Configuration** view, select the **Function runtime settings** tab. Set **Runtime Scale Monitoring** to **On**. Then select **Save**. Runtime-driven scaling allows you to connect non-HTTP trigger functions to services that run inside your virtual network.
:::image type="content" source="./media/functions-create-vnet/11-enable-runtime-scaling.png" alt-text="Screenshot of how to enable runtime-driven scaling for Azure Functions.":::
+> [!NOTE]
+> Runtime scaling isn't needed for function apps hosted in a Dedicated App Service plan.
+ ## Deploy a Service Bus trigger and HTTP trigger > [!NOTE]
-> Enabling Private Endpoints on a Function App also makes the Source Control Manager (SCM) site publicly inaccessible. The following instructions give deployment directions using the Deployment Center within the Function App. Alternatively, use [zip deploy](functions-deployment-technologies.md#zip-deploy) or [self-hosted](/azure/devops/pipelines/agents/docker) agents that are deployed into a subnet on the virtual network.
+> Enabling private endpoints on a function app also makes the Source Control Manager (SCM) site publicly inaccessible. The following instructions give deployment directions using the Deployment Center within the function app. Alternatively, use [zip deploy](functions-deployment-technologies.md#zip-deploy) or [self-hosted](/azure/devops/pipelines/agents/docker) agents that are deployed into a subnet on the virtual network.
1. In GitHub, go to the following sample repository. It contains a function app and two functions, an HTTP trigger, and a Service Bus queue trigger.
To use your function app with virtual networks, you need to join it to a subnet.
| **Repository** | functions-vnet-tutorial | The repository forked from https://github.com/Azure-Samples/functions-vnet-tutorial. | | **Branch** | main | The main branch of the repository you created. | | **Runtime stack** | .NET | The sample code is in C#. |
- | **Version** | v4.0 | The runtime version. |
+ | **Version** | .NET Core 3.1 | The runtime version. |
1. Select **Save**.
To use your function app with virtual networks, you need to join it to a subnet.
Congratulations! You've successfully deployed your sample function app.
-## Lock down your function app
-
-Now create the private endpoint to lock down your function app. This private endpoint will connect your function app privately and securely to your virtual network by using a private IP address.
-
-For more information, see the [private endpoint documentation](../private-link/private-endpoint-overview.md).
-
-1. In your function app, in the menu on the left, select **Networking**.
-
-1. Under **Private Endpoint Connections**, select **Configure your private endpoint connections**.
-
- :::image type="content" source="./media/functions-create-vnet/14-navigate-app-private-endpoint.png" alt-text="Screenshot of how to navigate to a function app private endpoint.":::
-
-1. Select **Add**.
-
-1. On the pane that opens, use the following private endpoint settings:
-
- :::image type="content" source="./media/functions-create-vnet/15-create-app-private-endpoint.png" alt-text="Screenshot of how to create a function app private endpoint. The name is functionapp-endpoint. The subscription is 'Private Test Sub CACHHAI'. The virtual network is MyVirtualNet-tutorial. The subnet is default.":::
-
-1. Select **OK** to add the private endpoint.
-
-Congratulations! You've successfully secured your function app, Service Bus, and storage account by adding private endpoints!
- ### Test your locked-down function app 1. In your function app, in the menu on the left, select **Functions**.
Congratulations! You've successfully secured your function app, Service Bus, and
1. In the menu on the left, select **Monitor**.
-You'll see that you can't monitor your app. Your browser doesn't have access to the virtual network, so it can't directly access resources within the virtual network.
+You see that you can't monitor your app. Your browser doesn't have access to the virtual network, so it can't directly access resources within the virtual network.
Here's an alternative way to monitor your function by using Application Insights:
In this tutorial, you created a Premium function app, storage account, and Servi
Use the following links to learn more Azure Functions networking options and private endpoints:
+- [How to configure Azure Functions with a virtual network](./configure-networking-how-to.md)
- [Networking options in Azure Functions](./functions-networking-options.md) - [Azure Functions Premium plan](./functions-premium-plan.md) - [Service Bus private endpoints](../service-bus-messaging/private-link-service.md)
azure-functions Functions Networking Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-networking-options.md
This article describes the networking features available across the hosting opti
The hosting models have different levels of network isolation available. Choosing the correct one helps you meet your network isolation requirements.
-You can host function apps in a couple of ways:
+You can host function apps in several ways:
* You can choose from plan options that run on a multitenant infrastructure, with various levels of virtual network connectivity and scaling options: * The [Consumption plan](consumption-plan.md) scales dynamically in response to load and offers minimal network isolation options.
You can host function apps in a couple of ways:
[!INCLUDE [functions-networking-features](../../includes/functions-networking-features.md)]
-## Quick start resources
+## Quickstart resources
Use the following resources to quickly get started with Azure Functions networking scenarios. These resources are referenced throughout the article. * ARM, Bicep, and Terraform templates:
- * [Private HTTP Triggered Function App](https://github.com/Azure-Samples/function-app-with-private-http-endpoint)
- * [Private Event Hubs Triggered Function App](https://github.com/Azure-Samples/function-app-with-private-eventhub)
+ * [Private HTTP triggered function app](https://github.com/Azure-Samples/function-app-with-private-http-endpoint)
+ * [Private Event Hubs triggered function app](https://github.com/Azure-Samples/function-app-with-private-eventhub)
* ARM templates only:
- * [Function App with Azure Storage private endpoints](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/function-app-storage-private-endpoints).
- * [Azure Function App with Virtual Network Integration](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-vnet-integration).
+ * [Function app with Azure Storage private endpoints](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/function-app-storage-private-endpoints).
+ * [Azure function app with Virtual Network Integration](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-vnet-integration).
* Tutorials:
+ * [Integrate Azure Functions with an Azure virtual network by using private endpoints](functions-create-vnet.md)
* [Restrict your storage account to a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network). * [Control Azure Functions outbound IP with an Azure virtual network NAT gateway](functions-how-to-use-nat-gateway.md).
To call other services that have a private endpoint connection, such as storage
### Service endpoints
-Using service endpoints, you can restrict a number of Azure services to selected virtual network subnets to provide a higher level of security. Regional virtual network integration enables your function app to reach Azure services that are secured with service endpoints. This configuration is supported on all [plans](functions-scale.md#networking-features) that support virtual network integration. To access a service endpoint-secured service, you must do the following:
+Using service endpoints, you can restrict many Azure services to selected virtual network subnets to provide a higher level of security. Regional virtual network integration enables your function app to reach Azure services that are secured with service endpoints. This configuration is supported on all [plans](functions-scale.md#networking-features) that support virtual network integration. To access a service endpoint-secured service, you must do the following:
1. Configure regional virtual network integration with your function app to connect to a specific subnet.
1. Go to the destination service and configure service endpoints against the integration subnet.
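For example, step 2 can be scripted with the Azure CLI. The following is a minimal sketch for an Azure Storage destination; the resource group, network, subnet, and account names are placeholders:

```bash
# Enable the Microsoft.Storage service endpoint on the integration subnet
# (placeholder resource names; adjust to your environment).
az network vnet subnet update \
    --resource-group myResourceGroup \
    --vnet-name myVnet \
    --name integration-subnet \
    --service-endpoints Microsoft.Storage

# Allow traffic from that subnet on the destination storage account.
az storage account network-rule add \
    --resource-group myResourceGroup \
    --account-name mystorageaccount \
    --vnet-name myVnet \
    --subnet integration-subnet
```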
To learn more, see [Virtual network service endpoints](../virtual-network/virtua
To restrict access to a specific subnet, create a restriction rule with a **Virtual Network** type. You can then select the subscription, virtual network, and subnet that you want to allow or deny access to.
-If service endpoints aren't already enabled with Microsoft.Web for the subnet that you selected, they'll be automatically enabled unless you select the **Ignore missing Microsoft.Web service endpoints** check box. The scenario where you might want to enable service endpoints on the app but not the subnet depends mainly on whether you have the permissions to enable them on the subnet.
+If service endpoints aren't already enabled with Microsoft.Web for the subnet that you selected, they are automatically enabled unless you select the **Ignore missing Microsoft.Web service endpoints** check box. The scenario where you might want to enable service endpoints on the app but not the subnet depends mainly on whether you have the permissions to enable them on the subnet.
-If you need someone else to enable service endpoints on the subnet, select the **Ignore missing Microsoft.Web service endpoints** check box. Your app will be configured for service endpoints in anticipation of having them enabled later on the subnet.
+If you need someone else to enable service endpoints on the subnet, select the **Ignore missing Microsoft.Web service endpoints** check box. Your app is configured for service endpoints in anticipation of having them enabled later on the subnet.
![Screenshot of the "Add IP Restriction" pane with the Virtual Network type selected.](../app-service/media/app-service-ip-restrictions/access-restrictions-vnet-add.png)
To learn how to set up virtual network integration, see [Enable virtual network
### Enable virtual network integration
-1. Go to the **Networking** blade in the Function App portal. Under **VNet Integration**, select **Click here to configure**.
+1. In your function app in the [Azure portal](https://portal.azure.com), select **Networking**, then under **VNet Integration** select **Click here to configure**.
1. Select **Add VNet**.
To learn how to set up virtual network integration, see [Enable virtual network
:::image type="content" source="./media/functions-networking-options/vnet-int-add-vnet-function-app.png" alt-text="Select the VNet"::: * The Functions Premium Plan only supports regional virtual network integration. If the virtual network is in the same region, either create a new subnet or select an empty, pre-existing subnet.
- * To select a virtual network in another region, you must have a virtual network gateway provisioned with point to site enabled. Virtual network integration across regions is only supported for Dedicated plans, but global peerings will work with regional virtual network integration.
-During the integration, your app is restarted. When integration is finished, you'll see details on the virtual network you're integrated with. By default, Route All will be enabled, and all traffic will be routed into your virtual network.
+ * To select a virtual network in another region, you must have a virtual network gateway provisioned with point to site enabled. Virtual network integration across regions is only supported for Dedicated plans, but global peerings work with regional virtual network integration.
+
+During the integration, your app is restarted. When integration is finished, you see details on the virtual network you're integrated with. By default, Route All is enabled, and all traffic is routed into your virtual network.
If you wish for only your private traffic ([RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) to be routed, please follow the steps in the [app service documentation](../app-service/overview-vnet-integration.md#application-routing).
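The integration itself can also be scripted. A minimal Azure CLI sketch, assuming an Elastic Premium or Dedicated plan and placeholder resource names:

```bash
# Connect the function app to a subnet in the same region. The subnet must be
# empty or already delegated to Microsoft.Web/serverFarms.
az functionapp vnet-integration add \
    --resource-group myResourceGroup \
    --name myFunctionApp \
    --vnet myVnet \
    --subnet integration-subnet
```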
There are some limitations with using virtual network integration:
Virtual network integration depends on a dedicated subnet. When you provision a subnet, the Azure subnet loses five IPs from the start. One address is used from the integration subnet for each plan instance. When you scale your app to four instances, then four addresses are used.
-When you scale up or down in size, the required address space is doubled for a short period of time. This affects the real, available supported instances for a given subnet size. The following table shows both the maximum available addresses per CIDR block and the impact this has on horizontal scale:
+When you scale up or down in size, the required address space is doubled for a short period of time. This affects the real, available supported instances for a given subnet size. The following table shows both the maximum available addresses per CIDR block and the effect this has on horizontal scale:
| CIDR block size | Max available addresses | Max horizontal scale (instances)<sup>*</sup> |
|--|--|--|
When you scale up or down in size, the required address space is doubled for a s
| /27 | 27 | 13 |
| /26 | 59 | 29 |
-<sup>*</sup>Assumes that you'll need to scale up or down in either size or SKU at some point.
+<sup>*</sup>Assumes that you need to scale up or down in either size or SKU at some point.
Since subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity for Functions Premium plans, you should use a /24 with 256 addresses for Windows and a /26 with 64 addresses for Linux. When creating subnets in Azure portal as part of integrating with the virtual network, a minimum size of /24 and /26 is required for Windows and Linux respectively.
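For example, a /24 integration subnet for a Windows Premium plan can be created with the Azure CLI (a sketch with placeholder names and address space):

```bash
# A /24 subnet provides 256 addresses, the recommended size for Windows
# Premium plans; use a /26 for Linux.
az network vnet subnet create \
    --resource-group myResourceGroup \
    --vnet-name myVnet \
    --name integration-subnet \
    --address-prefixes 10.0.1.0/24
```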
After your app integrates with your virtual network, it uses the same DNS server
## Restrict your storage account to a virtual network > [!NOTE]
-> To quickly deploy a function app with private endpoints enabled on the storage account, please refer to the following template: [Function App with Azure Storage private endpoints](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/function-app-storage-private-endpoints).
+> To quickly deploy a function app with private endpoints enabled on the storage account, please refer to the following template: [Function app with Azure Storage private endpoints](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/function-app-storage-private-endpoints).
When you create a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. You can replace this storage account with one that is secured with service endpoints or private endpoints.
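As a hedged sketch of the private endpoint option, the following Azure CLI commands add a private endpoint for the blob sub-resource of the storage account (placeholder names; repeat with other `--group-id` values for file, queue, and table as needed):

```bash
# Look up the storage account resource ID.
storage_id=$(az storage account show \
    --resource-group myResourceGroup \
    --name mystorageaccount \
    --query id --output tsv)

# Create a private endpoint for the blob sub-resource in a dedicated subnet.
az network private-endpoint create \
    --resource-group myResourceGroup \
    --name mystorage-blob-pe \
    --vnet-name myVnet \
    --subnet private-endpoint-subnet \
    --private-connection-resource-id "$storage_id" \
    --group-id blob \
    --connection-name mystorage-blob-connection
```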
To learn more about networking and Azure Functions:
<!--Links-->
[VNETnsg]: ../virtual-network/network-security-groups-overview.md
-[privateendpoints]: ../app-service/networking/private-endpoint.md
+[privateendpoints]: ../app-service/networking/private-endpoint.md
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
Azure Maps is a collection of geospatial services and SDKs that use fresh mappin
* Various routing options, such as point-to-point, multipoint, multipoint optimization, isochrone, electric vehicle, commercial vehicle, traffic influenced, and matrix routing.
* Traffic flow view and incidents view, for applications that require real-time traffic information.
* Time zone and Geolocation services.
-* Elevation services with Digital Elevation Model
* Geofencing service and mapping data storage, with location information hosted in Azure.
* Location intelligence through geospatial analytics.
Maps Creator provides the following
* [Wayfinding service] (preview). Use the [wayfinding API] to generate a path between two points within a facility. Use the [routeset API] to create the data that the wayfinding service needs to generate paths.
-### Elevation service
-
-The Azure Maps Elevation service is a web service that developers can use to retrieve elevation data from anywhere on the Earth's surface.
-
-The Elevation service allows you to retrieve elevation data in two formats:
-
-* **GeoTIFF raster format**. Use the [Render V2-Get Map Tile API](/rest/api/maps/renderv2) to retrieve elevation data in tile format.
-
-* **GeoJSON format**. Use the [Elevation APIs](/rest/api/maps/elevation) to request sampled elevation data along paths, within a defined bounding box, or at specific coordinates.
## Programming model

Azure Maps is built for mobility and can help you develop cross-platform applications. It uses a programming model that's language agnostic and supports JSON output through [REST APIs](/rest/api/maps/).
azure-maps Azure Maps Qps Rate Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-qps-rate-limits.md
Below are the QPS usage limits for each Azure Maps service by Pricing Tier.
| Creator - Alias, TilesetDetails | 10 | Not Available | Not Available |
| Creator - Conversion, Dataset, Feature State, WFS | 50 | Not Available | Not Available |
| Data service | 50 | 50 | Not Available |
-| Elevation service | 50 | 50 | Not Available |
+| Elevation service ([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023)) | 50 | 50 | Not Available |
| Geolocation service | 50 | 50 | 50 |
-| Render service - Contour tiles, Digital Elevation Model (DEM) tiles and Customer tiles | 50 | 50 | Not Available |
+| Render service - Contour tiles, Digital Elevation Model (DEM) tiles ([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023)) and Customer tiles | 50 | 50 | Not Available |
| Render service - Traffic tiles and Static maps | 50 | 50 | 50 |
| Render service - Road tiles | 500 | 500 | 50 |
| Render service - Satellite tiles | 250 | 250 | Not Available |
azure-maps Drawing Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md
Below is the manifest file for the sample drawing package. Go to the [Sample dra
You can convert uploaded drawing packages into map data by using the Azure Maps [Conversion service]. This article describes the drawing package requirements for the Conversion API. To view a sample package, you can download the [sample drawing package v2].
-For a guide on how to prepare your drawing package, see [Conversion Drawing Package Guide].
+For a guide on how to prepare your drawing package, see [Drawing Package Guide].
## Changes and Revisions
The JSON in this example shows the manifest file for the sample drawing package.
## Next steps
-For a guide on how to prepare your drawing package, see [Conversion Drawing Package Guide].
+For a guide on how to prepare your drawing package, see the drawing package guide.
> [!div class="nextstepaction"] > [Drawing Package Guide]
azure-maps How To Dev Guide Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-java-sdk.md
# Java REST SDK Developers Guide (preview)
-The Azure Maps Java SDK can be integrated with Java applications and libraries to build maps-related and location-aware applications. The Azure Maps Java SDK contains APIs for Search, Route, Render, Elevation, Geolocation, Traffic, Timezone, and Weather. These APIs support operations such as searching for an address, routing between different coordinates, obtaining the geo-location of a specific IP address etc.
+The Azure Maps Java SDK can be integrated with Java applications and libraries to build maps-related and location-aware applications. The Azure Maps Java SDK contains APIs for Search, Route, Render, Geolocation, Traffic, Timezone, and Weather. These APIs support operations such as searching for an address, routing between different coordinates, and obtaining the geolocation of a specific IP address.
> [!NOTE]
> Azure Maps Java SDK is baselined on Java 8, with testing and forward support up until the latest Java long-term support release (currently Java 18). For the list of Java versions for download, see [Java Standard Versions].
New-Item demo.java
| [Rendering][java rendering readme] | [azure-maps-rendering][java rendering package] | [rendering sample][java rendering sample] |
| [Geolocation][java geolocation readme] | [azure-maps-geolocation][java geolocation package] | [geolocation sample][java geolocation sample] |
| [Timezone][java timezone readme] | [azure-maps-timezone][java timezone package] | [timezone samples][java timezone sample] |
-| [Elevation][java elevation readme] | [azure-maps-elevation][java elevation package] | [elevation samples][java elevation sample] |
+| [Elevation][java elevation readme] ([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023))| [azure-maps-elevation][java elevation package] | [elevation samples][java elevation sample] |
## Create and authenticate a MapsSearchClient
azure-maps How To Request Elevation Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-request-elevation-data.md
Title: Request elevation data using the Azure Maps Elevation service
description: Learn how to request elevation data using the Azure Maps Elevation service.
Last updated 10/28/2021
# Request elevation data using the Azure Maps Elevation service
+> [!IMPORTANT]
+> The Azure Maps Elevation services and Render V2 DEM tiles have been retired and will no longer be available or supported after May 5, 2023. No other Azure Maps API, services or tilesets are affected. For more information, see [Elevation Services Retirement](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023).
+ The Azure Maps [Elevation service](/rest/api/maps/elevation) provides APIs to query elevation data anywhere on the earth's surface. You can request sampled elevation data along paths, within a defined bounding box, or at specific coordinates. Also, you can use the [Render V2 - Get Map Tile API](/rest/api/maps/renderv2) to retrieve elevation data in tile format. The tiles are delivered in GeoTIFF raster format. This article describes how to use the Azure Maps Elevation service and the Get Map Tile API to request elevation data. The elevation data can be requested in both GeoJSON and GeoTIFF formats.

## Prerequisites
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md
# Tutorial: Migrate web service from Bing Maps
-Both Azure and Bing Maps provide access to spatial APIs through REST web services. The API interfaces for these platforms perform similar functionalities but use different naming conventions and response objects. In this tutorial, you will learn how to:
+Both Azure and Bing Maps provide access to spatial APIs through REST web services. The API interfaces for these platforms perform similar functionalities but use different naming conventions and response objects. This tutorial demonstrates how to:
> * Forward and reverse geocoding
> * Search for points of interest
Both Azure and Bing Maps provide access to spatial APIs through REST web service
The following table provides the Azure Maps service APIs that provide similar functionality to the listed Bing Maps service APIs.
-| Bing Maps service API | Azure Maps service API |
-|--|--|
-| Autosuggest | [Search](/rest/api/maps/search) |
-| Directions (including truck) | [Route directions](/rest/api/maps/route/getroutedirections) |
-| Distance Matrix | [Route Matrix](/rest/api/maps/route/postroutematrixpreview) |
-| Imagery – Static Map | [Render](/rest/api/maps/render/getmapimage) |
-| Isochrones | [Route Range](/rest/api/maps/route/getrouterange) |
-| Local Insights | [Search](/rest/api/maps/search) + [Route Range](/rest/api/maps/route/getrouterange) |
-| Local Search | [Search](/rest/api/maps/search) |
-| Location Recognition (POIs) | [Search](/rest/api/maps/search) |
-| Locations (forward/reverse geocoding) | [Search](/rest/api/maps/search) |
-| Snap to Road | [POST Route directions](/rest/api/maps/route/postroutedirections) |
-| Spatial Data Services (SDS) | [Search](/rest/api/maps/search) + [Route](/rest/api/maps/route) + other Azure Services |
-| Time Zone | [Time Zone](/rest/api/maps/timezone) |
-| Traffic Incidents | [Traffic Incident Details](/rest/api/maps/traffic/gettrafficincidentdetail) |
-| Elevation | [Elevation](/rest/api/maps/elevation)
-
-The following service APIs are not currently available in Azure Maps:
+| Bing Maps service API | Azure Maps service API |
+|--|--|
+| Autosuggest | [Search] |
+| Directions (including truck) | [Route directions] |
+| Distance Matrix | [Route Matrix] |
+| Imagery – Static Map | [Render] |
+| Isochrones | [Route Range] |
+| Local Insights | [Search] + [Route Range] |
+| Local Search | [Search] |
+| Location Recognition (POIs) | [Search] |
+| Locations (forward/reverse geocoding) | [Search] |
+| Snap to Road | [POST Route directions] |
+| Spatial Data Services (SDS) | [Search] + [Route] + other Azure Services |
+| Time Zone | [Time Zone] |
+| Traffic Incidents | [Traffic Incident Details] |
+| Elevations | <sup>1</sup> |
+
+<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in Azure Maps, see [Create elevation data & services](elevation-data-services.md).
+
+The following service APIs aren't currently available in Azure Maps:
* Optimized Itinerary Routes - Planned. Azure Maps Route API does support traveling salesman optimization for a single vehicle.
* Imagery Metadata – Primarily used for getting tile URLs in Bing Maps. Azure Maps has a standalone service for directly accessing map tiles.
+* Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in Azure Maps, see [Create elevation data & services](elevation-data-services.md).
-Azure Maps has several additional REST web services that may be of interest;
+Azure Maps also has these REST web services:
-* [Azure Maps Creator](./creator-indoor-maps.md) – Create a custom private digital twin of buildings and spaces.
-* [Spatial operations](/rest/api/maps/spatial) – Offload complex spatial calculations and operations, such as geofencing, to a service.
-* [Map Tiles](/rest/api/maps/render/getmaptile) – Access road and imagery tiles from Azure Maps as raster and vector tiles.
-* [Batch routing](/rest/api/maps/route/postroutedirectionsbatchpreview) – Allows up to 1,000 route requests to be made in a single batch over a period of time. Routes are calculated in parallel on the server for faster processing.
-* [Traffic](/rest/api/maps/traffic) Flow – Access real-time traffic flow data as both raster and vector tiles.
-* [Geolocation API](/rest/api/maps/geolocation/get-ip-to-location) – Get the location of an IP address.
-* [Weather services](/rest/api/maps/weather) – Gain access to real-time and forecast weather data.
+* [Azure Maps Creator] – Create a custom private digital twin of buildings and spaces.
+* [Spatial operations] – Offload complex spatial calculations and operations, such as geofencing, to a service.
+* [Map Tiles] – Access road and imagery tiles from Azure Maps as raster and vector tiles.
+* [Batch routing] – Allows up to 1,000 route requests to be made in a single batch over a period of time. Routes are calculated in parallel on the server for faster processing.
+* [Traffic] Flow – Access real-time traffic flow data as both raster and vector tiles.
+* [Geolocation API] – Get the location of an IP address.
+* [Weather services] – Gain access to real-time and forecast weather data.
Be sure to also review the following best practices guides:
-* [Best practices for search](./how-to-use-best-practices-for-search.md)
-* [Best practices for routing](./how-to-use-best-practices-for-routing.md)
+* [Best practices for Azure Maps Search service]
+* [Best practices for Azure Maps Route service]
## Prerequisites
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+If you don't have an Azure subscription, create a [free account] before you begin.
* An [Azure Maps account]
* A [subscription key]

> [!NOTE]
-> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](how-to-manage-authentication.md).
+> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
## Geocoding addresses

Geocoding is the process of converting an address (like `"1 Microsoft way, Redmond, WA"`) into a coordinate (like longitude: -122.1298, latitude: 47.64005). Coordinates are then often used to position a pushpin on a map or center a map.
-Azure Maps provides several methods for geocoding addresses;
+Azure Maps provides several methods for geocoding addresses:
-* [Free-form address geocoding](/rest/api/maps/search/getsearchaddress): Specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
-* [Structured address geocoding](/rest/api/maps/search/getsearchaddressstructured): Specify the parts of a single address, such as the street name, city, country, and postal code and process the request immediately. This service is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
-* [Batch address geocoding](/rest/api/maps/search/postsearchaddressbatchpreview): Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses will be geocoded in parallel on the server and when completed the full result set can be downloaded. This service is recommended for geocoding large data sets.
-* [Fuzzy search](/rest/api/maps/search/getsearchfuzzy): This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
-* [Fuzzy batch search](/rest/api/maps/search/postsearchfuzzybatchpreview): Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data will be processed in parallel on the server and when completed the full result set can be downloaded.
+* [Free-form address geocoding]: Specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
+* [Structured address geocoding]: Specify the parts of a single address, such as the street name, city, country, and postal code and process the request immediately. This service is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
+* [Batch address geocoding]: Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses are geocoded in parallel on the server and when completed the full result set can be downloaded. This service is recommended for geocoding large data sets.
+* [Fuzzy search]: This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
+* [Fuzzy batch search]: Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
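As an example of the first option above, a free-form geocoding request is a single `GET` call; this is a sketch in which `{Your-Azure-Maps-Key}` is a placeholder for your subscription key:

```bash
# Free-form address geocoding with the Search Address API
# (query is URL-encoded; placeholder subscription key).
curl "https://atlas.microsoft.com/search/address/json?api-version=1.0&subscription-key={Your-Azure-Maps-Key}&query=1%20Microsoft%20Way%2C%20Redmond%2C%20WA"
```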
The following tables cross-reference the Bing Maps API parameters with the comparable API parameters in Azure Maps for structured and free-form address geocoding.

**Location by Address (structured address)**
-| Bing Maps API parameter | Comparable Azure Maps API parameter |
-|--|--|
-| `addressLine` | `streetNumber`, `streetName` or `crossStreet` |
-| `adminDistrict` | `countrySubdivision` |
-| `countryRegion` | `country` and `countryCode` |
-| `locality` | `municipality` or `municipalitySubdivision` |
-| `postalCode` | `postalCode` |
-| `maxResults` (`maxRes`) | `limit` |
+| Bing Maps API parameter | Comparable Azure Maps API parameter |
+|-|--|
+| `addressLine` | `streetNumber`, `streetName` or `crossStreet` |
+| `adminDistrict` | `countrySubdivision` |
+| `countryRegion` | `country` and `countryCode` |
+| `locality` | `municipality` or `municipalitySubdivision` |
+| `postalCode` | `postalCode` |
+| `maxResults` (`maxRes`) | `limit` |
| `includeNeighborhood` (`inclnb`) | N/A – Always returned by Azure Maps if available. |
| `include` (`incl`) | N/A – Country ISO2 Code always returned by Azure Maps. |
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` – See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` – See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` – For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` – For more information, see [Azure Maps supported views]. |
-Azure Maps also supports;
+Azure Maps also supports:
* `countrySecondarySubdivision` – County, districts
-* `countryTertiarySubdivision` - Named areas; boroughs, cantons, communes
+* `countryTertiarySubdivision` - Named areas, boroughs, cantons, communes
* `ofs` - Page through the results in combination with `maxResults` parameter.

**Location by Query (free-form address string)**
-| Bing Maps API parameter | Comparable Azure Maps API parameter |
-|--|--|
-| `query` | `query` |
-| `maxResults` (`maxRes`) | `limit` |
+| Bing Maps API parameter | Comparable Azure Maps API parameter |
+|--|--|
+| `query` | `query` |
+| `maxResults` (`maxRes`) | `limit` |
| `includeNeighborhood` (`inclnb`) | N/A – Always returned by Azure Maps if available. |
| `include` (`incl`) | N/A – Country ISO2 Code always returned by Azure Maps. |
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` – See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` – See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` – For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` – For more information, see [Azure Maps supported views]. |
-Azure Maps also supports;
+Azure Maps also supports:
-* `typeahead` - Species if the query will be interpreted as a partial input and the search will enter predictive mode (autosuggest/autocomplete).
+* `typeahead` - Specifies if the query is interpreted as a partial input and the search enters predictive mode (autosuggest/autocomplete).
* `countrySet` – A comma-separated list of ISO2 country codes to limit the search to.
* `lat`/`lon`, `topLeft`/`btmRight`, `radius` – Specify user location and area to make the results more locally relevant.
* `ofs` - Page through the results in combination with `maxResults` parameter.
-An example of how to use the search service is documented [here](./how-to-search-for-address.md). Be sure to review the [best practices for search](./how-to-use-best-practices-for-search.md) documentation.
+For more information on using the search service, see [Search for a location using Azure Maps Search services] and [Best practices for Azure Maps Search service].
## Reverse geocode a coordinate (Find a Location by Point)

Reverse geocoding is the process of converting geographic coordinates (like longitude: -122.1298, latitude: 47.64005) into its approximate address (like `"1 Microsoft way, Redmond, WA"`).
-Azure Maps provides several reverse geocoding methods;
+Azure Maps provides several reverse geocoding methods:
-* [Address reverse geocoder](/rest/api/maps/search/getsearchaddressreverse): Specify a single geographic coordinate to get its approximate address and process the request immediately.
-* [Cross street reverse geocoder](/rest/api/maps/search/getsearchaddressreversecrossstreet): Specify a single geographic coordinate to get nearby cross street information (for example, 1st & main) and process the request immediately.
-* [Batch address reverse geocoder](/rest/api/maps/search/postsearchaddressreversebatchpreview): Create a request containing up to 10,000 coordinates and have them processed over a period of time. All the data will be processed in parallel on the server and when completed the full result set can be downloaded.
+* [Address reverse geocoder]: Specify a single geographic coordinate to get its approximate address and process the request immediately.
+* [Cross street reverse geocoder]: Specify a single geographic coordinate to get nearby cross street information (for example, 1st & main) and process the request immediately.
+* [Batch address reverse geocoder]: Create a request containing up to 10,000 coordinates and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
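For example, a single-coordinate reverse geocoding request looks like this sketch; note the `latitude,longitude` order of the `query` parameter and the placeholder key:

```bash
# Reverse geocode one coordinate with the Search Address Reverse API.
curl "https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&subscription-key={Your-Azure-Maps-Key}&query=47.64005,-122.1298"
```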
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.
-| Bing Maps API parameter | Comparable Azure Maps API parameter |
-|--|-|
-| `point` | `query` |
-| `includeEntityTypes` | `entityType` – See entity type comparison table below. |
-| `includeNeighborhood` (`inclnb`) | N/A – Always returned by Azure Maps if available. |
-| `include` (`incl`) | N/A – Country ISO2 Code always returned by Azure Maps. |
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` – See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` – See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| Bing Maps API parameter | Comparable Azure Maps API parameter |
+|--|-|
+| `point` | `query` |
+| `includeEntityTypes` | `entityType` – See entity type comparison table below.|
+| `includeNeighborhood` (`inclnb`) | N/A – Always returned by Azure Maps if available. |
+| `include` (`incl`) | N/A – Country ISO2 Code always returned by Azure Maps.|
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` – For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` – For more information, see [Azure Maps supported views]. |
-Be sure to review the [best practices for search](./how-to-use-best-practices-for-search.md) documentation.
+For more information on searching in Azure Maps, see [Best practices for Azure Maps Search service].
-The Azure Maps reverse geocoding API has some additional features not available in Bing Maps that might be useful to integrate when migrating your app:
+The Azure Maps reverse geocoding API has features not available in Bing Maps that might be useful to integrate when migrating your app:
* Retrieve speed limit data.
-* Retrieve road use information; local road, arterial, limited access, ramp, etc.
+* Retrieve road use information: local road, arterial, limited access, ramp, etc.
* The side of street the coordinate falls on.

**Entity type comparison table**

The following table cross-references the Bing Maps entity type values to the equivalent property names in Azure Maps.
-| Bing Maps Entity Type | Comparable Azure Maps Entity type | Description |
-|--|-|--|
-| `Address` | | *Address* |
-| `Neighborhood` | `Neighbourhood` | *Neighborhood* |
-| `PopulatedPlace` | `Municipality` or `MunicipalitySubdivision` | *City*, *Town or Sub*, or *Super City* |
-| `Postcode1` | `PostalCodeArea` | *Postal Code* or *Zip Code* |
-| `AdminDivision1` | `CountrySubdivision` | *State* or *Province* |
-| `AdminDivision2` | `CountrySecondarySubdivison` | *County* or *districts* |
-| `CountryRegion` | `Country` | *Country name* |
-| | `CountryTertiarySubdivision` | *Boroughs*, *Cantons*, *Communes* |
+| Bing Maps Entity Type | Comparable Azure Maps Entity type | Description |
+|--|--|--|
+| `Address` | | *Address* |
+| `Neighborhood` | `Neighbourhood` | *Neighborhood* |
+| `PopulatedPlace` | `Municipality` or `MunicipalitySubdivision` | *City*, *Town or Sub*, or *Super City* |
+| `Postcode1` | `PostalCodeArea` | *Postal Code* or *Zip Code* |
+| `AdminDivision1` | `CountrySubdivision` | *State* or *Province* |
+| `AdminDivision2` | `CountrySecondarySubdivison` | *County* or *districts* |
+| `CountryRegion` | `Country` | *Country name* |
+| | `CountryTertiarySubdivision` | *Boroughs*, *Cantons*, *Communes* |
## Get location suggestions (Autosuggest)
-Several of the Azure Maps search API's support predictive mode that can be used for autosuggest scenarios. The Azure Maps [fuzzy search](/rest/api/maps/search/getsearchfuzzy) API is the most like the Bing Maps Autosuggest API. The following API's also support predictive mode, add `&typeahead=true` to the query;
+Several of the Azure Maps search APIs support predictive mode that can be used for autosuggest scenarios. The Azure Maps [fuzzy search] API is the most similar to the Bing Maps Autosuggest API. The following APIs also support predictive mode; add `&typeahead=true` to the query:
-* [Free-form address geocoding](/rest/api/maps/search/getsearchaddress): Specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
-* [Fuzzy search](/rest/api/maps/search/getsearchfuzzy): This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
-* [POI search](/rest/api/maps/search/getsearchpoi): Search for points of interests by name. For example; `"starbucks"`.
-* [POI category search](/rest/api/maps/search/getsearchpoicategory): Search for points of interests by category. For example; "restaurant".
+* [Free-form address geocoding]: Specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
+* [Fuzzy search]: This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
+* [POI search]: Search for points of interests by name. For example, `"starbucks"`.
+* [POI category search]: Search for points of interests by category. For example, "restaurant".
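For example, adding `typeahead=true` to a fuzzy search request, optionally biased to a user location with `lat`/`lon`, looks like this sketch (placeholder key):

```bash
# Fuzzy search in predictive (typeahead) mode, biased toward Seattle.
curl "https://atlas.microsoft.com/search/fuzzy/json?api-version=1.0&subscription-key={Your-Azure-Maps-Key}&typeahead=true&query=starbucks&lat=47.6062&lon=-122.3321"
```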
## Calculate routes and directions
-Azure Maps can be used to calculate routes and directions. Azure Maps has many of the same functionalities as the Bing Maps routing service, such as;
+Azure Maps can be used to calculate routes and directions. Azure Maps has many of the same functionalities as the Bing Maps routing service, such as:
* arrival and departure times
* real-time and predictive-based traffic routes
-* different modes of transportation; driving, walking, truck
+* different modes of transportation: driving, walking, truck
* waypoint order optimization (traveling salesman)

> [!NOTE]
> Azure Maps requires all waypoints to be coordinates. Addresses will need to be geocoded first.
-The Azure Maps routing service provides the following APIs for calculating routes;
+The Azure Maps routing service provides the following APIs for calculating routes:
-* [Calculate route](/rest/api/maps/route/getroutedirections): Calculate a route and have the request processed immediately. This API supports both GET and POST requests. POST requests are recommended when specifying a large number of waypoints or when using lots of the route options to ensure that the URL request doesn't become too long and cause issues.
-* [Batch route](/rest/api/maps/route/postroutedirectionsbatchpreview): Create a request containing up to 1,000 route request and have them processed over a period of time. All the data will be processed in parallel on the server and when completed the full result set can be downloaded.
+* [Calculate route]: Calculate a route and have the request processed immediately. This API supports both `GET` and `POST` requests. `POST` requests are recommended when specifying a large number of waypoints or when using many of the route options to ensure that the URL request doesn't become too long and cause issues.
+* [Batch route]: Create a request containing up to 1,000 route requests and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
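For example, a simple two-point route request looks like this sketch; waypoints are `latitude,longitude` pairs separated by colons, and the key is a placeholder:

```bash
# Calculate a car route between two coordinates with Get Route Directions.
curl "https://atlas.microsoft.com/route/directions/json?api-version=1.0&subscription-key={Your-Azure-Maps-Key}&query=47.64005,-122.1298:47.61010,-122.34255&travelMode=car"
```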
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.
-| Bing Maps API parameter | Comparable Azure Maps API parameter |
-|--|--|
-| `avoid` | `avoid` |
-| `dateTime` (`dt`) | `departAt` or `arriveAt` |
-| `distanceBeforeFirstTurn` (`dbft`) | N/A |
-| `distanceUnit` (`du`) | N/A – Azure Maps only uses the metric system. |
-| `heading` (`hd`) | `vehicleHeading` |
-| `maxSolutions` (`maxSolns`) | `maxAlternatives`, `alternativeType`, `minDeviationDistance`, and `minDeviationTime` |
-| `optimize` (`optwz`) | `routeType` and `traffic` |
-| `optimizeWaypoints` (`optWp`) | `computeBestOrder` |
-| `routeAttributes` (`ra`) | `instructionsType` |
-| `routePathOutput` (`rpo`) | `routeRepresentation` |
-| `timeType` (`tt`) | `departAt` or `arriveAt` |
-| `tolerances` (`tl`) | N/A |
-| `travelMode` | `travelMode` |
-| `waypoint.n` (`wp.n`) or `viaWaypoint.n` (`vwp.n`) | `query` – coordinates in the format `lat0,lon0:lat1,lon1….` |
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` – See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` – See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| Bing Maps API parameter | Comparable Azure Maps API parameter |
+|--|--|
+| `avoid` | `avoid` |
+| `dateTime` (`dt`) | `departAt` or `arriveAt` |
+| `distanceBeforeFirstTurn` (`dbft`) | N/A |
+| `distanceUnit` (`du`) | N/A – Azure Maps only uses the metric system. |
+| `heading` (`hd`) | `vehicleHeading` |
+| `maxSolutions` (`maxSolns`) | `maxAlternatives`, `alternativeType`, `minDeviationDistance`, and `minDeviationTime` |
+| `optimize` (`optwz`) | `routeType` and `traffic` |
+| `optimizeWaypoints` (`optWp`) | `computeBestOrder` |
+| `routeAttributes` (`ra`) | `instructionsType` |
+| `routePathOutput` (`rpo`) | `routeRepresentation` |
+| `timeType` (`tt`) | `departAt` or `arriveAt` |
+| `tolerances` (`tl`) | N/A |
+| `travelMode` | `travelMode` |
+| `waypoint.n` (`wp.n`) or `viaWaypoint.n` (`vwp.n`) | `query` – coordinates in the format `lat0,lon0:lat1,lon1….` |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` – For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` – For more information, see [Azure Maps supported views]. |
The Azure Maps routing API also supports truck routing within the same API. The following table cross-references the additional Bing Maps truck routing parameters with the comparable API parameters in Azure Maps.
The Azure Maps routing API also supports truck routing within the same API. The
> [!TIP]
> By default, the Azure Maps route API only returns a summary (distance and times) and the coordinates for the route path. Use the `instructionsType` parameter to retrieve turn-by-turn instructions. The `routeRepresentation` parameter can be used to filter out the summary and route path.
-Be sure to also review the [Best practices for routing](./how-to-use-best-practices-for-routing.md) documentation.
+For more information on the Azure Maps route API, see [Best practices for Azure Maps Route service].
-The Azure Maps routing API has many additional features not available in Bing Maps that might be useful to integrate when migrating your app:
+The Azure Maps routing API has features not available in Bing Maps that might be useful to integrate when migrating your app:
* Support for route type: shortest, fastest, thrilling, and most fuel efficient.
-* Support for additional travel modes: bicycle, bus, motorcycle, taxi, truck, and van.
+* Support for more travel modes: bicycle, bus, motorcycle, taxi, truck, and van.
* Support for 150 waypoints.
-* Compute multiple travel times in a single request; historic traffic, live traffic, no traffic.
+* Compute multiple travel times in a single request: historic traffic, live traffic, no traffic.
* Avoid additional road types: carpool roads, unpaved roads, already used roads.
* Engine specification-based routing. Calculate routes for combustion or electric vehicles based on their remaining fuel/charge and engine specifications.
* Specify maximum vehicle speed.
There are several ways to snap coordinates to roads in Azure Maps.
**Using the route direction API to snap coordinates**
-Azure Maps can snap coordinates to roads by using the [route directions](/rest/api/maps/route/postroutedirections) API. This service can be used to reconstruct a logical route between a set of coordinates and is comparable to the Bing Maps Snap to Road API.
+Azure Maps can snap coordinates to roads by using the [route directions] API. This service can be used to reconstruct a logical route between a set of coordinates and is comparable to the Bing Maps Snap to Road API.
There are two different ways to use the route directions API to snap coordinates to roads.
-* If there are 150 coordinates or less, they can be passed as waypoints in the GET route directions API. Using this approach two different types of snapped data can be retrieved; route instructions will contain the individual snapped waypoints, while the route path will have an interpolated set of coordinates that fill the full path between the coordinates.
-* If there are more than 150 coordinates, the POST route directions API can be used. The coordinates start and end coordinates have to be passed into the query parameter, but all coordinates can be passed into the `supportingPoints` parameter in the body of the POST request and formatted a GeoJSON geometry collection of points. The only snapped data available using this approach will be the route path that is an interpolated set of coordinates that fill the full path between the coordinates. [Here is an example](https://samples.azuremaps.com/?sample=snap-points-to-logical-route-path) of this approach using the services module in the Azure Maps Web SDK.
+* If there are 150 coordinates or less, they can be passed as waypoints in the `GET` route directions API. Using this approach, two different types of snapped data can be retrieved: route instructions contain the individual snapped waypoints, while the route path has an interpolated set of coordinates that fill the full path between the coordinates.
+* If there are more than 150 coordinates, the `POST` route directions API can be used. The start and end coordinates have to be passed into the query parameter, but all coordinates can be passed into the `supportingPoints` parameter in the body of the `POST` request and formatted as a GeoJSON geometry collection of points. The only snapped data available using this approach is the route path, which is an interpolated set of coordinates that fill the full path between the coordinates. To see an example of this approach using the services module in the Azure Maps Web SDK, see the [Snap points to logical route path] sample in the Azure Maps samples.
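A hedged sketch of the `POST` approach follows; the key and coordinates are placeholders, and GeoJSON points are `[longitude, latitude]`:

```bash
# Start and end go in the query parameter; all points to snap go in
# supportingPoints as a GeoJSON geometry collection.
curl -X POST \
  "https://atlas.microsoft.com/route/directions/json?api-version=1.0&subscription-key={Your-Azure-Maps-Key}&query=47.64005,-122.1298:47.61010,-122.34255" \
  -H "Content-Type: application/json" \
  -d '{
    "supportingPoints": {
      "type": "GeometryCollection",
      "geometries": [
        { "type": "Point", "coordinates": [-122.1298, 47.64005] },
        { "type": "Point", "coordinates": [-122.2550, 47.62500] },
        { "type": "Point", "coordinates": [-122.34255, 47.61010] }
      ]
    }
  }'
```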
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.

| Bing Maps API parameter | Comparable Azure Maps API parameter |
|--|--|
-| `points` | `supportingPoints` – pass these points into the body of the post request |
+| `points` | `supportingPoints` – pass these points into the body of the `POST` request |
| `interpolate` | N/A |
| `includeSpeedLimit` | N/A |
| `includeTruckSpeedLimit` | N/A |
| `speedUnit` | N/A |
| `travelMode` | `travelMode` |
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` – See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` – See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` – For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` – For more information, see [Azure Maps supported views]. |
The Azure Maps routing API also supports truck routing parameter within the same API to ensure logical paths are calculated. The following table cross-references the additional Bing Maps truck routing parameters with the comparable API parameters in Azure Maps.
The Azure Maps routing API also supports truck routing parameter within the same
Since this approach uses the route directions API, the full set of options in that service can be used to customize the logic used to snap the coordinate to roads. For example, specifying a departure time would result in historic traffic data being taken into consideration.
-The Azure Maps route directions API does not currently return speed limit data, however that can be retrieved using the Azure Maps reverse geocoding API.
+The Azure Maps route directions API doesn't currently return speed limit data, however that can be retrieved using the Azure Maps reverse geocoding API.
**Using the Web SDK to snap coordinates**
-The Azure Maps Web SDK uses vector tiles to render the maps. These vector tiles contain the raw road geometry information and can be used to calculate the nearest road to a coordinate for simple snapping of individual coordinates. This is useful when you want the coordinates to visually appear over roads and you are already using the Azure Maps Web SDK to visualize the data.
+The Azure Maps Web SDK uses vector tiles to render the maps. These vector tiles contain the raw road geometry information and can be used to calculate the nearest road to a coordinate for simple snapping of individual coordinates. This is useful when you want the coordinates to visually appear over roads and you're already using the Azure Maps Web SDK to visualize the data.
-This approach however will only snap to the road segments that are loaded within the map view. When zoomed out at country level there may be no road data, so snapping can't be done, however at that zoom level a single pixel can represent the area of several city blocks so snapping isn't needed. To address this, the snapping logic can be applied every time the map has finished moving. [Here is a code sample](https://samples.azuremaps.com/?sample=basic-snap-to-road-logic) that demonstrates this.
+This approach, however, only snaps to the road segments that are loaded within the map view. When zoomed out at the country level there may be no road data, so snapping can't be done; however, at that zoom level a single pixel can represent the area of several city blocks, so snapping isn't needed. To address this, the snapping logic can be applied every time the map has finished moving. To see a fully functional example of this snapping logic, see the [Basic snap to road logic] sample in the Azure Maps samples.
**Using the Azure Maps vector tiles directly to snap coordinates**
-The Azure Maps vector tiles contain the raw road geometry data that can be used to calculate the nearest point on a road to a coordinate to do basic snapping of individual coordinates. All road segments appear in the sectors at zoom level 15, so you will want to retrieve tiles from there. You can then use the [quadtree tile pyramid math](./zoom-levels-and-tile-grid.md) to determine that tiles are needed and convert the tiles to geometries. From there a spatial math library, such as [turf js](https://turfjs.org/) or [NetTopologySuite](https://github.com/NetTopologySuite/NetTopologySuite) can be used to calculate the closest line segments.
+The Azure Maps vector tiles contain the raw road geometry data that can be used to calculate the nearest point on a road to a coordinate to do basic snapping of individual coordinates. All road segments appear in the sectors at zoom level 15, so you want to retrieve tiles from there. You can then use the [quadtree tile pyramid math] to determine which tiles are needed and convert the tiles to geometries. From there a spatial math library, such as [turf js] or [NetTopologySuite], can be used to calculate the closest line segments.
## Retrieve a map image (Static Map)
-Azure Maps provides an API for rendering the static map images with data overlaid. The Azure Maps [Map image render](/rest/api/maps/render/getmapimagerytile) API is comparable to the static map API in Bing Maps.
+Azure Maps provides an API for rendering the static map images with data overlaid. The Azure Maps [Map image render] API is comparable to the static map API in Bing Maps.
> [!NOTE]
-> Azure Maps requires the center, all pushpins and path locations to be coordinates in `longitude,latitude` format whereas Bing Maps uses the `latitude,longitude` format.</p>
-<p>Addresses will need to be geocoded first.
+> Azure Maps requires the center, all pushpins and path locations to be coordinates in `longitude,latitude` format whereas Bing Maps uses the `latitude,longitude` format. Addresses will need to be geocoded first.
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.
The following table cross-references the Bing Maps API parameters with the compa
| `centerPoint` | `center` |
| `format` | `format` – specified as part of URL path. Currently only PNG supported. |
| `heading` | N/A – Streetside not supported. |
-| `imagerySet` | `layer` and `style` – See [Supported map styles](./supported-map-styles.md) documentation. |
+| `imagerySet` | `layer` and `style` – For more information, see [Supported map styles].|
| `mapArea` (`ma`) | `bbox` |
| `mapLayer` (`ml`) | N/A |
| `mapSize` (`ms`) | `width` and `height` – can be up to 8192x8192 in size. |
The following table cross-references the Bing Maps API parameters with the compa
| `highlightEntity` (`he`) | N/A |
| `style` | N/A |
| route parameters | N/A |
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` – See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` – See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` – For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` – For more information, see [Azure Maps supported views]. |
> [!NOTE]
> Azure Maps uses a tile system with tiles that are twice the size of the map tiles used in Bing Maps. As such, the zoom level value in Azure Maps will appear one zoom level closer in Azure Maps compared to Bing Maps. Lower the zoom level in the requests you are migrating by 1 to compensate for this.
-See the [How-to guide on the map image render API](./how-to-render-custom-data.md) for more information.
+For more information, see [Render custom data on a raster map].
-In addition to being able to generate a static map image, the Azure Maps render service also provides the ability to directly access map tiles in raster (PNG) and vector format;
+In addition to being able to generate a static map image, the Azure Maps render service also enables direct access to map tiles in raster (PNG) and vector format:
-* [Map tile](/rest/api/maps/render/getmaptile) – Retrieve raster (PNG) and vector tiles for the base maps (roads, boundaries, background).
-* [Map imagery tile](/rest/api/maps/render/getmapimagerytile) – Retrieve aerial and satellite imagery tiles.
+* [Map tiles] – Retrieve raster (PNG) and vector tiles for the base maps (roads, boundaries, background).
+* [Map imagery tile] – Retrieve aerial and satellite imagery tiles.
### Pushpin URL parameter format comparison
In Bing Maps, pushpins can be added to a static map image by using the `pushpin`
> `&pushpin=latitude,longitude;iconStyle;label`
-Additional pushpins can be added by adding additional `pushpin` parameters to the URL with a different set of values. Pushpin icon styles are limited to one of the predefined styles available in the Bing Maps API.
+Pushpins can be added by adding more `pushpin` parameters to the URL with a different set of values. Pushpin icon styles are limited to one of the predefined styles available in the Bing Maps API.
For example, in Bing Maps, a red pushpin with the label "AB" can be added to the map at coordinates (longitude: -110, latitude: 45) with the following URL parameter:
In Azure Maps, pushpins can also be added to a static map image by specifying th
> `&pins=iconType|pinStyles||pinLocation1|pinLocation2|...`
-Additional styles can be used by adding additional `pins` parameters to the URL with a different style and set of locations.
+Additional styles can be used by adding more `pins` parameters to the URL with a different style and set of locations.
-When it comes to pin locations, Azure Maps requires the coordinates to be in `longitude latitude` format whereas Bing Maps uses `latitude,longitude` format. Also note that **there is a space, not a comma** separating longitude and latitude in Azure Maps.
+Regarding pin locations, Azure Maps requires the coordinates to be in `longitude latitude` format whereas Bing Maps uses `latitude,longitude` format. Also note that **there is a space, not a comma** separating longitude and latitude in Azure Maps.
The `iconType` value specifies the type of pin to create and can have the following values:

* `default` – The default pin icon.
-* `none` – No icon is displayed, only labels will be rendered.
+* `none` – No icon is displayed, only labels are rendered.
* `custom` – Specifies a custom icon is to be used. A URL pointing to the icon image can be added to the end of the `pins` parameter after the pin location information.
* `{udid}` – A Unique Data ID (UDID) for an icon stored in the Azure Maps Data Storage platform.
-Pin styles in Azure Maps are added with the format `optionNameValue`, with multiple styles separated by pipe (`|`) characters like this `iconType|optionName1Value1|optionName2Value2`. Note the option names and values are not separated. The following style option names can be used to style pushpins in Azure Maps:
+Pin styles in Azure Maps are added with the format `optionNameValue`, with multiple styles separated by pipe (`|`) characters like this `iconType|optionName1Value1|optionName2Value2`. Note the option names and values aren't separated. The following style option names can be used to style pushpins in Azure Maps:
* `al` – Specifies the opacity (alpha) of the pushpins. Can be a number between 0 and 1.
* `an` – Specifies the pin anchor. X and y pixel values specified in the format `x y`.
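Putting the format and the options above together, a hedged sketch of a request that renders a single red pushpin labeled "AB" at (longitude: -110, latitude: 45) might look like the following; the `co` (pin color) option and the quoted-label syntax are assumptions beyond the options listed above, and `<key>` is a placeholder:

```bash
# Hedged sketch: one default pushpin, red color (assumed "co" option), label
# "AB" quoted before the "longitude latitude" location. -G/--data-urlencode
# handles the spaces and pipes in the pins value. <key> is a placeholder.
curl -G --output pins.png "https://atlas.microsoft.com/map/static/png" \
  --data-urlencode "api-version=1.0" \
  --data-urlencode "zoom=4" \
  --data-urlencode "center=-110,45" \
  --data-urlencode "pins=default|coFF0000||'AB'-110 45" \
  --data-urlencode "subscription-key=<key>"
```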
In Bing Maps, lines and polygons can be added to a static map image by using the `drawCurve` parameter in the URL:
> `&drawCurve=shapeType,styleType,location1,location2...`
-Additional styles can be used by adding additional `drawCurve` parameters to the URL with a different style and set of locations.
+More styles can be used by adding additional `drawCurve` parameters to the URL with a different style and set of locations.
Locations in Bing Maps are specified with the format `latitude1,longitude1_latitude2,longitude2_…`. Locations can also be encoded.
In Azure Maps, lines and polygons can also be added to a static map image by specifying the `path` parameter in the URL:
> `&path=pathStyles||pathLocation1|pathLocation2|...`
-When it comes to path locations, Azure Maps requires the coordinates to be in `longitude latitude` format whereas Bing Maps uses `latitude,longitude` format. Also note that **there is a space, not a comma separating** longitude and latitude in Azure Maps. Azure Maps does not support encoded paths currently. Larger data sets can be uploaded as a GeoJSON fills into the Azure Maps Data Storage API as documented [here](./how-to-render-custom-data.md#upload-pins-and-path-data).
+When it comes to path locations, Azure Maps requires the coordinates to be in `longitude latitude` format whereas Bing Maps uses `latitude,longitude` format. Also note that **there is a space, not a comma** separating longitude and latitude in Azure Maps. Azure Maps doesn't support encoded paths currently. Larger data sets can be uploaded as GeoJSON files into the Azure Maps Data Storage API. For more information, see [Upload pins and path data](./how-to-render-custom-data.md#upload-pins-and-path-data).
-Path styles in Azure Maps are added with the format `optionNameValue`, with multiple styles separated by pipe (`|`) characters like this `optionName1Value1|optionName2Value2`. Note the option names and values are not separated. The following style option names can be used to style paths in Azure Maps:
+Path styles in Azure Maps are added with the format `optionNameValue`, with multiple styles separated by pipe (`|`) characters like this `optionName1Value1|optionName2Value2`. Note the option names and values aren't separated. The following style option names can be used to style paths in Azure Maps:
* `fa` – The fill color opacity (alpha) used when rendering polygons. Can be a number between 0 and 1.
* `fc` – The fill color used to render the area of a polygon.
For example, in Azure Maps, a blue line with 50% opacity and a thickness of four pixels can be added to the map using the `path` parameter.
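A hedged sketch of such a request follows; the `lc` (line color), `la` (line alpha), and `lw` (line width) option names are assumptions beyond the `fa`/`fc` options listed above, and the coordinates and `<key>` are illustrative:

```bash
# Hedged sketch: a blue line (assumed "lc" option), 50% opacity (assumed "la"),
# four pixels wide (assumed "lw"), between two "longitude latitude" locations.
# <key> is a placeholder for your Azure Maps subscription key.
curl -G --output path.png "https://atlas.microsoft.com/map/static/png" \
  --data-urlencode "api-version=1.0" \
  --data-urlencode "path=lc0000FF|la.5|lw4||-110 45|-100 50" \
  --data-urlencode "subscription-key=<key>"
```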
## Calculate a distance matrix
-Azure Maps provides an API for calculating the travel times and distances between a set of locations as a distance matrix. The Azure Maps distance matrix API is comparable to the distance matrix API in Bing Maps;
+Azure Maps provides an API for calculating the travel times and distances between a set of locations as a distance matrix. The Azure Maps distance matrix API is comparable to the distance matrix API in Bing Maps:
-* [Route matrix](/rest/api/maps/route/postroutematrixpreview): Asynchronously calculates travel times and distances for a set of origins and destinations. Up to 700 cells per request is supported (the number of origins multiplied by the number of destinations). With that constraint in mind, examples of possible matrix dimensions are: `700x1`, `50x10`, `10x10`, `28x25`, `10x70`.
+* [Route matrix]: Asynchronously calculates travel times and distances for a set of origins and destinations. Up to 700 cells per request are supported (the number of origins multiplied by the number of destinations). With that constraint in mind, examples of possible matrix dimensions are: `700x1`, `50x10`, `10x10`, `28x25`, `10x70`.
> [!NOTE]
-> A request to the distance matrix API can only be made using a POST request with the origin and destination information in the body of the request.</p>
-<p>Additionally, Azure Maps requires all origins and destinations to be coordinates. Addresses will need to be geocoded first.
+> A request to the distance matrix API can only be made using a `POST` request with the origin and destination information in the body of the request. Additionally, Azure Maps requires all origins and destinations to be coordinates. Addresses will need to be geocoded first.
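As a hedged sketch, a route matrix request with two origins and one destination might look like the following; the coordinates are illustrative and `<key>` is a placeholder:

```bash
# Minimal sketch of a POST Route Matrix request. Origins and destinations are
# GeoJSON MultiPoint geometries ("longitude, latitude" order) in the body.
# <key> is a placeholder for your Azure Maps subscription key.
curl -X POST "https://atlas.microsoft.com/route/matrix/json?api-version=1.0&subscription-key=<key>" \
  -H "Content-Type: application/json" \
  -d '{
    "origins": { "type": "MultiPoint", "coordinates": [[4.85106, 52.36006], [4.85056, 52.36187]] },
    "destinations": { "type": "MultiPoint", "coordinates": [[4.85003, 52.36241]] }
  }'
```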
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.

| Bing Maps API parameter | Comparable Azure Maps API parameter |
|-|-|
-| `origins` | `origins` – specify in the POST request body as GeoJSON. |
-| `destinations` | `destination` – specify in the POST request body as GeoJSON. |
+| `origins` | `origins` – specify in the `POST` request body as GeoJSON. |
+| `destinations` | `destinations` – specify in the `POST` request body as GeoJSON. |
| `endTime` | `arriveAt` |
| `startTime` | `departAt` |
| `travelMode` | `travelMode` |
| `resolution` | N/A |
| `distanceUnit` | N/A – All distances in meters. |
| `timeUnit` | N/A – All times in seconds. |
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` – See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` – See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` – For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` – For more information, see [Azure Maps supported views]. |
> [!TIP]
> All the advanced routing options available in the Azure Maps routing API (truck routing, engine specifications, avoid…) are supported in the Azure Maps distance matrix API.

## Calculate an isochrone
-Azure Maps provides an API for calculating an isochrone, a polygon covering an area that can be traveled to in any direction from an origin point within a specified amount of time or amount of fuel/charge. The Azure Maps route range API is comparable to the isochrone API in Bing Maps;
+Azure Maps provides an API for calculating an isochrone, a polygon covering an area that can be traveled to in any direction from an origin point within a specified amount of time or amount of fuel/charge. The Azure Maps route range API is comparable to the isochrone API in Bing Maps.
-* [Route](/rest/api/maps/route/getrouterange) Range**: Calculate a polygon covering an area that can be traveled to in any direction from an origin point within a specified amount of time, distance, or amount of fuel/charge available.
+* [Route Range]: Calculate a polygon covering an area that can be traveled to in any direction from an origin point within a specified amount of time, distance, or amount of fuel/charge available.
> [!NOTE]
> Azure Maps requires the query origin to be a coordinate. Addresses will need to be geocoded first.
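As a hedged sketch, a 30-minute drive-time isochrone request might look like the following; note that the route APIs take the query coordinate in `latitude,longitude` order, and `<key>` is a placeholder:

```bash
# Minimal sketch of a Get Route Range (isochrone) request: a polygon reachable
# within 1800 seconds of driving from the origin. The query coordinate is in
# "latitude,longitude" order for the route APIs. <key> is a placeholder.
curl "https://atlas.microsoft.com/route/range/json?api-version=1.0&query=47.6062,-122.3321&timeBudgetInSec=1800&subscription-key=<key>"
```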
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.
| `maxDistance` (`maxDis`) | `distanceBudgetInMeters` |
| `distanceUnit` (`du`) | N/A – All distances in meters. |
| `optimize` (`optmz`) | `routeType` |
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` – See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` – See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` – For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` – For more information, see [Azure Maps supported views]. |
> [!TIP]
> All the advanced routing options available in the Azure Maps routing API (truck routing, engine specifications, avoid…) are supported in the Azure Maps isochrone API.
Point of interest data can be searched in Bing Maps by using the following APIs:
-* **Local search:** Searches for points of interest that are nearby (radial search), by name, or by entity type (category). The Azure Maps [POI search](/rest/api/maps/search/getsearchpoi) and [POI category search](/rest/api/maps/search/getsearchpoicategory) APIs are most like this API.
-* **Location recognition**: Searches for points of interests that are within a certain distance of a location. The Azure Maps [nearby search](/rest/api/maps/search/getsearchnearby) API is most like this API.
-* **Local insights:** Searches for points of interests that are within a specified maximum driving time or distance from a specific coordinate. This is achievable with Azure Maps by first calculating an isochrone and then passing it into the [search within geometry](/rest/api/maps/search/postsearchinsidegeometry) API.
+* **Local search**: Searches for points of interest that are nearby (radial search), by name, or by entity type (category). The Azure Maps [POI search] and [POI category search] APIs are most like this API.
+* **Location recognition**: Searches for points of interest that are within a certain distance of a location. The Azure Maps [nearby search] API is most like this API.
+* **Local insights**: Searches for points of interest that are within a specified maximum driving time or distance from a specific coordinate. This is achievable with Azure Maps by first calculating an isochrone and then passing it into the [Search within geometry] API.
Azure Maps provides several search APIs for points of interest:
-* [POI search](/rest/api/maps/search/getsearchpoi): Search for points of interests by name. For example; `"starbucks"`.
-* [POI category search](/rest/api/maps/search/getsearchpoicategory): Search for points of interests by category. For example; "restaurant".
-* [Nearby search](/rest/api/maps/search/getsearchnearby): Searches for points of interests that are within a certain distance of a location.
-* [Fuzzy search](/rest/api/maps/search/getsearchfuzzy): This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
-* [Search within geometry](/rest/api/maps/search/postsearchinsidegeometry): Search for points of interests that are within a specified geometry (polygon).
-* [Search along route](/rest/api/maps/search/postsearchalongroute): Search for points of interests that are along a specified route path.
-* [Fuzzy batch search](/rest/api/maps/search/postsearchfuzzybatchpreview): Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data will be processed in parallel on the server and when completed the full result set can be downloaded.
+* [POI search]: Search for points of interest by name. For example, `"starbucks"`.
+* [POI category search]: Search for points of interest by category. For example, "restaurant".
+* [Nearby search]: Searches for points of interest that are within a certain distance of a location.
+* [Fuzzy search]: This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and processes the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
+* [Search within geometry]: Search for points of interest that are within a specified geometry (polygon).
+* [Search along route]: Search for points of interest that are along a specified route path.
+* [Fuzzy batch search]: Create a request containing up to 10,000 addresses, places, landmarks, or points of interest and have them processed over a period of time. All the data is processed in parallel on the server and, when completed, the full result set can be downloaded.
-Be sure to review the [best practices for search](./how-to-use-best-practices-for-search.md) documentation.
+For more information on searching in Azure Maps, see [Best practices for Azure Maps Search service].
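For instance, a minimal sketch of a POI search for the `"starbucks"` example above, limited to a 5-km radius around a coordinate, might look like this (`<key>` is a placeholder):

```bash
# Minimal sketch of a Get Search POI request: POIs named "starbucks" within
# 5,000 meters of the given latitude/longitude. <key> is a placeholder.
curl "https://atlas.microsoft.com/search/poi/json?api-version=1.0&query=starbucks&lat=47.6062&lon=-122.3321&radius=5000&subscription-key=<key>"
```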
## Get traffic incidents
-Azure Maps provides several APIs for retrieving traffic data. There are two types of traffic data available;
+Azure Maps provides several APIs for retrieving traffic data. There are two types of traffic data available:
* **Flow data** – provides metrics on the flow of traffic on sections of roads. This is often used to color code roads. This data is updated every 2 minutes.
* **Incident data** – provides data on construction, road closures, accidents, and other incidents that may affect traffic. This data is updated every minute.

Bing Maps provides traffic flow and incident data in its interactive map controls, and also makes incident data available as a service.
-Traffic data is also integrated into the Azure Maps interactive map controls. Azure maps also provides the following traffic services APIs;
+Traffic data is also integrated into the Azure Maps interactive map controls. Azure Maps also provides the following traffic services APIs:
-* [Traffic flow segments](/rest/api/maps/traffic/gettrafficflowsegment): Provides information about the speeds and travel times of the road fragment closest to the given coordinates.
-* [Traffic flow tiles](/rest/api/maps/traffic/gettrafficflowtile): Provides raster and vector tiles containing traffic flow data. These
+* [Traffic flow segments]: Provides information about the speeds and travel times of the road fragment closest to the given coordinates.
+* [Traffic flow tiles]: Provides raster and vector tiles containing traffic flow data. These
can be used with the Azure Maps controls or in third-party map controls such as Leaflet. The vector tiles can also be used for advanced data analysis.
-* [Traffic incident details](/rest/api/maps/traffic/gettrafficincidentdetail): Provides traffic incident details that are within a bounding box, zoom level, and traffic model.
-* [Traffic incident tiles](/rest/api/maps/traffic/gettrafficincidenttile): Provides raster and vector tiles containing traffic incident data.
-* [Traffic incident viewport](/rest/api/maps/traffic/gettrafficincidentviewport): Retrieves the legal and technical information for the viewport described in the request, such as the traffic model ID.
+* [Traffic incident details]: Provides traffic incident details that are within a bounding box, zoom level, and traffic model.
+* [Traffic incident tiles]: Provides raster and vector tiles containing traffic incident data.
+* [Traffic incident viewport]: Retrieves the legal and technical information for the viewport described in the request, such as the traffic model ID.
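As a hedged sketch, a traffic flow segment request for the road fragment closest to a coordinate might look like this; the style and zoom values are illustrative and `<key>` is a placeholder:

```bash
# Minimal sketch of a Get Traffic Flow Segment request: current and free-flow
# speeds for the road fragment closest to the "latitude,longitude" query point.
# <key> is a placeholder for your Azure Maps subscription key.
curl "https://atlas.microsoft.com/traffic/flow/segment/json?api-version=1.0&style=absolute&zoom=10&query=52.41072,4.84239&subscription-key=<key>"
```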
The following table cross-references the Bing Maps traffic API parameters with the comparable traffic incident details API parameters in Azure Maps.
| `includeLocationCodes` | N/A |
| `severity` (`s`) | N/A – all data returned |
| `type` (`t`) | N/A – all data returned |
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` – See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` – See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` – For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` – For more information, see [Azure Maps supported views]. |
## Get a time zone
-Azure Maps provides an API for retrieving the time zone a coordinate is in. The Azure Maps time zone API is comparable to the time zone API in Bing Maps;
+Azure Maps provides an API for retrieving the time zone a coordinate is in. The Azure Maps time zone API is comparable to the time zone API in Bing Maps.
-* [Time zone by coordinate](/rest/api/maps/timezone/gettimezonebycoordinates): Specify a coordinate and get the details for the time zone it falls in.
+* [Time zone by coordinate]: Specify a coordinate and get the details for the time zone it falls in.
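As a hedged sketch, such a request for Seattle might look like the following (`<key>` is a placeholder):

```bash
# Minimal sketch of a Get Time zone By Coordinates request; the query is in
# "latitude,longitude" order. <key> is a placeholder.
curl "https://atlas.microsoft.com/timezone/byCoordinates/json?api-version=1.0&query=47.6062,-122.3321&subscription-key=<key>"
```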
The following table cross-references the Bing Maps API parameters with the comparable API parameters in Azure Maps.
| `query` | N/A - locations must be geocoded first. |
| `dateTime` | `timeStamp` |
| `includeDstRules` | N/A – Always included in response by Azure Maps. |
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](./azure-maps-authentication.md) documentation. |
-| `culture` (`c`) | `language` – See [supported languages](./supported-languages.md) documentation. |
-| `userRegion` (`ur`) | `view` – See [supported views](./supported-languages.md#azure-maps-supported-views) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `culture` (`c`) | `language` – For more information, see [Localization support in Azure Maps]. |
+| `userRegion` (`ur`) | `view` – For more information, see [Azure Maps supported views]. |
-In addition to this the Azure Maps platform also provides a number of additional time zone APIs to help with conversions with time zone names and IDs;
+In addition, the Azure Maps platform provides many other time zone APIs to help with conversions between time zone names and IDs:
-* [Time zone by ID](/rest/api/maps/timezone/gettimezonebyid): Returns current, historical, and future time zone information for the specified IANA time zone ID.
-* [Time zone Enum IANA](/rest/api/maps/timezone/gettimezoneenumiana): Returns a full list of IANA time zone IDs. Updates to the IANA service are reflected in the system within one day.
-* [Time zone Enum Windows](/rest/api/maps/timezone/gettimezoneenumwindows): Returns a full list of Windows Time Zone IDs.
-* [Time zone IANA version](/rest/api/maps/timezone/gettimezoneianaversion): Returns the current IANA version number used by Azure Maps.
-* [Time zone Windows to IANA](/rest/api/maps/timezone/gettimezonewindowstoiana): Returns a corresponding IANA ID, given a valid Windows Time Zone ID. Multiple IANA IDs may be returned for a single Windows ID.
+* [Time zone by ID]: Returns current, historical, and future time zone information for the specified IANA time zone ID.
+* [Time zone Enum IANA]: Returns a full list of IANA time zone IDs. Updates to the IANA service are reflected in the system within one day.
+* [Time zone Enum Windows]: Returns a full list of Windows Time Zone IDs.
+* [Time zone IANA version]: Returns the current IANA version number used by Azure Maps.
+* [Time zone Windows to IANA]: Returns a corresponding IANA ID, given a valid Windows Time Zone ID. Multiple IANA IDs may be returned for a single Windows ID.
## Spatial Data Services (SDS)
Batch geocoding is the process of taking a large number of addresses or places and geocoding them all at once in a single request.
Bing Maps allows up to 200,000 addresses to be passed in a single batch geocode request. This request goes into a queue and usually processes over a period of time, anywhere from a few minutes to a few hours depending on the size of the data set and the load on the service. Each address in the request generates a transaction.
-Azure Maps has a batch geocoding service, however it allows up to 10,000 addresses to be passed in a single request and is processed over seconds to a few minutes depending on the size of the data set and the load on the service. Each address in the request generated a transaction. In Azure Maps, the batch geocoding service is only available the Gen 2 or S1 pricing tier. For more information on pricing tiers, see [Choose the right pricing tier in Azure Maps](choose-pricing-tier.md).
+Azure Maps has a batch geocoding service; however, it allows up to 10,000 addresses to be passed in a single request and processes them over seconds to a few minutes depending on the size of the data set and the load on the service. Each address in the request generates a transaction. In Azure Maps, the batch geocoding service is only available in the Gen 2 or S1 pricing tier. For more information on pricing tiers, see [Choose the right pricing tier in Azure Maps].
-Another option for geocoding a large number addresses with Azure Maps is to make parallel requests to the standard search APIs. These services only accept a single address per request but can be used with the S0 tier that also provides free usage limits. The S0 tier allows up to 50 requests per second to the Azure Maps platform from a single account. So if you process limit these to stay within that limit, it is possible to geocode upwards of 180,000 address an hour. The Gen 2 or S1 pricing tier doesn't have a documented limit on the number of queries per second that can be made from an account, so a lot more data can be processed faster when using that pricing tier, however using the batch geocoding service will help reduce the total amount of data transferred and will drastically reduce the network traffic.
+Another option for geocoding a large number of addresses with Azure Maps is to make parallel requests to the standard search APIs. These services only accept a single address per request but can be used with the S0 tier that also provides free usage limits. The S0 tier allows up to 50 requests per second to the Azure Maps platform from a single account. So if you throttle your requests to stay within that limit, it's possible to geocode upwards of 180,000 addresses an hour. The Gen 2 or S1 pricing tier doesn't have a documented limit on the number of queries per second that can be made from an account, so a lot more data can be processed faster when using that pricing tier; however, using the batch geocoding service helps reduce the total amount of data transferred, reducing network traffic.
-* [Free-form address geocoding](/rest/api/maps/search/getsearchaddress): Specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
-* [Structured address geocoding](/rest/api/maps/search/getsearchaddressstructured): Specify the parts of a single address, such as the street name, city, country, and postal code and process the request immediately. This service is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
-* [Batch address geocoding](/rest/api/maps/search/postsearchaddressbatchpreview): Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses will be geocoded in parallel on the server and when completed the full result set can be downloaded. This service is recommended for geocoding large data sets.
-* [Fuzzy search](/rest/api/maps/search/getsearchfuzzy): This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and process the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
-* [Fuzzy batch search](/rest/api/maps/search/postsearchfuzzybatchpreview): Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data will be processed in parallel on the server and when completed the full result set can be downloaded.
+* [Free-form address geocoding]: Specify a single address string (like `"1 Microsoft way, Redmond, WA"`) and process the request immediately. This service is recommended if you need to geocode individual addresses quickly.
+* [Structured address geocoding]: Specify the parts of a single address, such as the street name, city, country, and postal code, and process the request immediately. This service is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
+* [Batch address geocoding]: Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses are geocoded in parallel on the server and, when completed, the full result set can be downloaded. This service is recommended for geocoding large data sets.
+* [Fuzzy search]: This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category and processes the request immediately. This API is recommended for applications where users can search for addresses or points of interest from the same textbox.
+* [Fuzzy batch search]: Create a request containing up to 10,000 addresses, places, landmarks, or points of interest and have them processed over a period of time. All the data is processed in parallel on the server and, when completed, the full result set can be downloaded.
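As a hedged sketch, a single free-form geocoding request for the example address above might look like this; `-G` with `--data-urlencode` handles the spaces in the query, and `<key>` is a placeholder:

```bash
# Minimal sketch of a free-form address geocoding request.
# <key> is a placeholder for your Azure Maps subscription key.
curl -G "https://atlas.microsoft.com/search/address/json" \
  --data-urlencode "api-version=1.0" \
  --data-urlencode "query=1 Microsoft Way, Redmond, WA" \
  --data-urlencode "subscription-key=<key>"
```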
### Get administrative boundary data
-In Bing Maps, administrative boundaries for countries, states, counties, cities, and postal codes are made available via the Geodata API. This API takes in either a coordinate or query to geocode. If a query is passed in, it is geocoded and the coordinates from the first result is used. This API takes the coordinates and retrieves the boundary of the specified entity type that intersects the coordinate. Note that this API did not necessarily return the boundary for the query that was passed in. If a query for `"Seattle, WA"` is passed in, but the entity type value is set to country region, the boundary for the USA would be returned.
+In Bing Maps, administrative boundaries for countries, states, counties, cities, and postal codes are made available via the Geodata API. This API takes in either a coordinate or query to geocode. If a query is passed in, it's geocoded and the coordinates from the first result are used. This API takes the coordinates and retrieves the boundary of the specified entity type that intersects the coordinate. This API doesn't necessarily return the boundary for the query that was passed in. If a query for `"Seattle, WA"` is passed in, but the entity type value is set to country region, the boundary for the USA would be returned.
-Azure Maps also provides access to administrative boundaries (countries, states, counties, cities, and postal codes). To retrieve a boundary, you must query one of the search APIs for the boundary you want (i.e. `Seattle, WA`). If the search result has an associated boundary, a geometry ID will be provided in the result response. The search polygon API can then be used to retrieve the exact boundaries for one or more geometry IDs. This is a bit different than Bing Maps as Azure Maps returns the boundary for what was searched for, whereas Bing Maps returns a boundary for a specified entity type at a specified coordinate. Additionally, the boundary data returned by Azure Maps is in GeoJSON format.
+Azure Maps also provides access to administrative boundaries (countries, states, counties, cities, and postal codes). To retrieve a boundary, you must query one of the search APIs for the boundary you want (such as `Seattle, WA`). If the search result has an associated boundary, a geometry ID is provided in the result response. The search polygon API can then be used to retrieve the exact boundaries for one or more geometry IDs. This is a bit different than Bing Maps as Azure Maps returns the boundary for what was searched for, whereas Bing Maps returns a boundary for a specified entity type at a specified coordinate. Additionally, the boundary data returned by Azure Maps is in GeoJSON format.
To recap:

1. Pass a query for the boundary you want to receive into one of the following search APIs.
- * [Free-form address geocoding](/rest/api/maps/search/getsearchaddress)
- * [Structured address geocoding](/rest/api/maps/search/getsearchaddressstructured)
- * [Batch address geocoding](/rest/api/maps/search/postsearchaddressbatchpreview)
- * [Fuzzy search](/rest/api/maps/search/getsearchfuzzy)
- * [Fuzzy batch search](/rest/api/maps/search/postsearchfuzzybatchpreview)
+ * [Free-form address geocoding]
+ * [Structured address geocoding]
+ * [Batch address geocoding]
+ * [Fuzzy search]
+ * [Fuzzy batch search]
-1. If the desired result(s) has a geometry ID(s), pass it into the [Search Polygon API](/rest/api/maps/search/getsearchpolygon).
+1. If the desired result(s) has a geometry ID(s), pass it into the [Search Polygon API].
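A hedged sketch of that two-step flow is shown below; the `dataSources.geometry.id` response path is an assumption based on the search result shape, `jq` is used for JSON extraction, and `<key>` is a placeholder:

```bash
# Hedged sketch of the two-step boundary flow: fuzzy-search for "Seattle, WA",
# pull the geometry ID from the first result (assumed response path
# dataSources.geometry.id, extracted with jq), then fetch its polygon.
GEOMETRY_ID=$(curl -sG "https://atlas.microsoft.com/search/fuzzy/json" \
  --data-urlencode "api-version=1.0" \
  --data-urlencode "query=Seattle, WA" \
  --data-urlencode "subscription-key=<key>" \
  | jq -r '.results[0].dataSources.geometry.id')

curl "https://atlas.microsoft.com/search/polygon/json?api-version=1.0&geometries=${GEOMETRY_ID}&subscription-key=<key>"
```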
### Host and query spatial business data
-The spatial data services in Bing Maps provide a simple spatial data storage solution for hosting business data and exposing it as a spatial REST service. This service provides four main queries; find by property, find nearby, find in bounding box, and find with 1 mile of a route. Many companies who use this service, often already have their business data already stored in a database somewhere and have been uploading a small subset of it into this service to power applications like store locators. Since key-based authentication provides basic security, it has been recommended that this service only be used with public facing data.
+The spatial data services in Bing Maps provide a simple spatial data storage solution for hosting business data and exposing it as a spatial REST service. This service provides four main queries: find by property, find nearby, find in bounding box, and find within 1 mile of a route. Many companies that use this service often already have their business data stored in a database somewhere and have been uploading a small subset of it into this service to power applications like store locators. Since key-based authentication provides basic security, it's recommended that this service be used only with public facing data.
-Most business location data starts off in a database. As such it is recommended to use existing Azure storage solutions such as Azure SQL or Azure PostgreSQL (with the PostGIS plugin). Both of these storage solutions support spatial data and provide a rich set of spatial querying capabilities. Once your data is in a suitable storage solution, it can then be integrated into your application by creating a custom web service, or by using a framework such as ASP.NET or Entity Framework. Using this approach provides more querying capabilities and as well as much higher security options.
+Most business location data starts off in a database. As such it's recommended to use existing Azure storage solutions such as Azure SQL or Azure PostgreSQL (with the PostGIS plugin). Both of these storage solutions support spatial data and provide a rich set of spatial querying capabilities. Once your data is in a suitable storage solution, it can then be integrated into your application by creating a custom web service, or by using a framework such as ASP.NET or Entity Framework. Using this approach is more secure and provides more querying capabilities.
Azure Cosmos DB also provides a limited set of spatial capabilities that, depending on your scenario, may be sufficient. Here are some useful resources around hosting and querying spatial data in Azure.
-* [Azure SQL Spatial Data Types overview](/sql/relational-databases/spatial/spatial-data-types-overview)
-* [Azure SQL Spatial – Query nearest neighbor](/sql/relational-databases/spatial/query-spatial-data-for-nearest-neighbor)
-* [Azure Cosmos DB geospatial capabilities overview](../cosmos-db/sql-query-geospatial-intro.md)
+* [Azure SQL Spatial Data Types overview]
+* [Azure SQL Spatial – Query nearest neighbor]
+* [Azure Cosmos DB geospatial capabilities overview]
## Client libraries
-Azure Maps provides client libraries for the following programming languages;
+Azure Maps provides client libraries for the following programming languages:
* JavaScript, TypeScript, Node.js – [documentation](./how-to-use-services-module.md) \| [npm package](https://www.npmjs.com/package/azure-maps-rest)
-Open-source client libraries for other programming languages;
+Open-source client libraries for other programming languages:
* .NET Standard 2.0 – [GitHub project](https://github.com/perfahlen/AzureMapsRestServices) \| [NuGet package](https://www.nuget.org/packages/AzureMapsRestToolkit/)
Learn more about the Azure Maps REST services.
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+
+[Search]: /rest/api/maps/search
+[Route directions]: /rest/api/maps/route/getroutedirections
+[Route Matrix]: /rest/api/maps/route/postroutematrixpreview
+[Render]: /rest/api/maps/render/getmapimage
+[Route Range]: /rest/api/maps/route/getrouterange
+[POST Route directions]: /rest/api/maps/route/postroutedirections
+[Route]: /rest/api/maps/route
+[Time Zone]: /rest/api/maps/timezone
+[Elevation]: /rest/api/maps/elevation
+
+[Azure Maps Creator]: creator-indoor-maps.md
+[Spatial operations]: /rest/api/maps/spatial
+[Map Tiles]: /rest/api/maps/render/getmaptile
+[Map imagery tile]: /rest/api/maps/render/getmapimagerytile
+[Batch routing]: /rest/api/maps/route/postroutedirectionsbatchpreview
+[Traffic]: /rest/api/maps/traffic
+[Geolocation API]: /rest/api/maps/geolocation/get-ip-to-location
+[Weather services]: /rest/api/maps/weather
+
+[Best practices for Azure Maps Search service]: how-to-use-best-practices-for-search.md
+[Best practices for Azure Maps Route service]: how-to-use-best-practices-for-routing.md
+
+[free account]: https://azure.microsoft.com/free/?azure-portal=true
+[manage authentication in Azure Maps]: how-to-manage-authentication.md
+
+[Free-form address geocoding]: /rest/api/maps/search/getsearchaddress
+[Structured address geocoding]: /rest/api/maps/search/getsearchaddressstructured
+[Batch address geocoding]: /rest/api/maps/search/postsearchaddressbatchpreview
+[Fuzzy search]: /rest/api/maps/search/getsearchfuzzy
+[Fuzzy batch search]: /rest/api/maps/search/postsearchfuzzybatchpreview
+
+[Authentication with Azure Maps]: azure-maps-authentication.md
+[Localization support in Azure Maps]: supported-languages.md
+[Azure Maps supported views]: supported-languages.md#azure-maps-supported-views
+
+[Address reverse geocoder]: /rest/api/maps/search/getsearchaddressreverse
+[Cross street reverse geocoder]: /rest/api/maps/search/getsearchaddressreversecrossstreet
+[Batch address reverse geocoder]: /rest/api/maps/search/postsearchaddressreversebatchpreview
+
+[POI search]: /rest/api/maps/search/get-search-poi
+[POI category search]: /rest/api/maps/search/get-search-poi-category
+[Calculate route]: /rest/api/maps/route/getroutedirections
+[Batch route]: /rest/api/maps/route/postroutedirectionsbatchpreview
+
+[Snap points to logical route path]: https://samples.azuremaps.com/?sample=snap-points-to-logical-route-path?azure-portal=true
+[Basic snap to road logic]: https://samples.azuremaps.com/?sample=basic-snap-to-road-logic?azure-portal=true
+
+[quadtree tile pyramid math]: zoom-levels-and-tile-grid.md
+[turf js]: https://turfjs.org?azure-portal=true
+[NetTopologySuite]: https://github.com/NetTopologySuite/NetTopologySuite?azure-portal=true
+
+[Map image render]: /rest/api/maps/render/getmapimagerytile
+[Supported map styles]: supported-map-styles.md
+[Render custom data on a raster map]: how-to-render-custom-data.md
+
+[Search along route]: /rest/api/maps/search/postsearchalongroute
+[Search within geometry]: /rest/api/maps/search/postsearchinsidegeometry
+[nearby search]: /rest/api/maps/search/getsearchnearby
+[Search Polygon API]: /rest/api/maps/search/getsearchpolygon
+[Search for a location using Azure Maps Search services]: how-to-search-for-address.md
+
+[Traffic flow segments]: /rest/api/maps/traffic/gettrafficflowsegment
+[Traffic flow tiles]: /rest/api/maps/traffic/gettrafficflowtile
+[Traffic incident details]: /rest/api/maps/traffic/gettrafficincidentdetail
+[Traffic incident tiles]: /rest/api/maps/traffic/gettrafficincidenttile
+[Traffic incident viewport]: /rest/api/maps/traffic/gettrafficincidentviewport
+
+[Time zone by ID]: /rest/api/maps/timezone/gettimezonebyid
+[Time zone Windows to IANA]: /rest/api/maps/timezone/gettimezonewindowstoiana
+[Time zone Enum IANA]: /rest/api/maps/timezone/gettimezoneenumiana
+[Time zone Enum Windows]: /rest/api/maps/timezone/gettimezoneenumwindows
+[Time zone IANA version]: /rest/api/maps/timezone/gettimezoneianaversion
+[Time zone by coordinate]: /rest/api/maps/timezone/gettimezonebycoordinates
+
+[Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md
+
+[Azure SQL Spatial Data Types overview]: /sql/relational-databases/spatial/spatial-data-types-overview
+[Azure SQL Spatial – Query nearest neighbor]: /sql/relational-databases/spatial/query-spatial-data-for-nearest-neighbor
+[Azure Cosmos DB geospatial capabilities overview]: ../cosmos-db/sql-query-geospatial-intro.md
azure-maps Migrate From Bing Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps.md
The following table provides a high-level list of Bing Maps features and the relative support for those features in Azure Maps.
| Autosuggest | ✓ |
| Directions (including truck) | ✓ |
| Distance Matrix | ✓ |
-| Elevations | ✓ |
+| Elevations | <sup>1</sup> |
| Imagery – Static Map | ✓ |
| Imagery Metadata | ✓ |
| Isochrones | ✓ |
| Traffic Incidents | ✓ |
| Configuration driven maps | N/A |
+<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in Azure Maps, see [Create elevation data & services](elevation-data-services.md).
+
Bing Maps provides basic key-based authentication. Azure Maps provides both basic key-based authentication and highly secure Azure Active Directory authentication.

## Licensing considerations
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md
The table lists key API features in the Google Maps V3 JavaScript SDK and the supported API feature in the Azure Maps Web SDK.
| Geocoder service | ✓ |
| Directions service | ✓ |
| Distance Matrix service | ✓ |
-| Elevation service | ✓ |
+| Elevation service | <sup>1</sup> |
+
+<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in Azure Maps, see [Create elevation data & services](elevation-data-services.md).
## Notable differences in the web SDKs
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md
You will also learn:
The table shows the Azure Maps service APIs that have similar functionality to the listed Google Maps service APIs.
-| Google Maps service API | Azure Maps service API |
-|-||
-| Directions | [Route](/rest/api/maps/route) |
-| Distance Matrix | [Route Matrix](/rest/api/maps/route/postroutematrixpreview) |
-| Geocoding | [Search](/rest/api/maps/search) |
-| Places Search | [Search](/rest/api/maps/search) |
-| Place Autocomplete | [Search](/rest/api/maps/search) |
-| Snap to Road | See [Calculate routes and directions](#calculate-routes-and-directions) section. |
-| Speed Limits | See [Reverse geocode a coordinate](#reverse-geocode-a-coordinate) section. |
-| Static Map | [Render](/rest/api/maps/render/getmapimage) |
-| Time Zone | [Time Zone](/rest/api/maps/timezone) |
-| Elevation | [Elevation](/rest/api/maps/elevation) |
+| Google Maps service API | Azure Maps service API |
+|-|--|
+| Directions | [Route](/rest/api/maps/route) |
+| Distance Matrix | [Route Matrix](/rest/api/maps/route/postroutematrixpreview) |
+| Geocoding | [Search](/rest/api/maps/search) |
+| Places Search | [Search](/rest/api/maps/search) |
+| Place Autocomplete | [Search](/rest/api/maps/search) |
+| Snap to Road | See [Calculate routes and directions](#calculate-routes-and-directions) section. |
+| Speed Limits | See [Reverse geocode a coordinate](#reverse-geocode-a-coordinate) section. |
+| Static Map | [Render](/rest/api/maps/render/getmapimage) |
+| Time Zone | [Time Zone](/rest/api/maps/timezone) |
+| Elevation | [Elevation](/rest/api/maps/elevation)<sup>1</sup> |
+
+<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in Azure Maps, see [Create elevation data & services](elevation-data-services.md).
The following service APIs aren't currently available in Azure Maps:
azure-maps Migrate From Google Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps.md
The table provides a high-level list of Azure Maps features, which correspond to Google Maps features.
| REST Service APIs | ✓ |
| Directions (Routing) | ✓ |
| Distance Matrix | ✓ |
-| Elevation | ✓ |
+| Elevation | <sup>1</sup> |
| Geocoding (Forward/Reverse) | ✓ |
| Geolocation | ✓ |
| Nearest Roads | ✓ |
| Maps Embedded API | N/A |
| Map URLs | N/A |
+<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in Azure Maps, see [Create elevation data & services](elevation-data-services.md).
+
Google Maps provides basic key-based authentication. Azure Maps provides both basic key-based authentication and Azure Active Directory authentication. Azure Active Directory authentication provides more security features compared to basic key-based authentication.

## Licensing considerations
azure-maps Rest Sdk Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-sdk-developer-guide.md
Azure Maps Java SDK supports [Java 8][Java 8] or above.
| [Rendering][java rendering readme] | [azure-maps-rendering][java rendering package] | [rendering sample][java rendering sample] |
| [Geolocation][java geolocation readme] | [azure-maps-geolocation][java geolocation package] | [geolocation sample][java geolocation sample] |
| [Timezone][java timezone readme] | [azure-maps-timezone][java timezone package] | [timezone samples][java timezone sample] |
-| [Elevation][java elevation readme] | [azure-maps-elevation][java elevation package] | [elevation samples][java elevation sample] |
+| [Elevation][java elevation readme] ([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023)) | [azure-maps-elevation][java elevation package] | [elevation samples][java elevation sample] |
For more information, see the [Java SDK Developers Guide].
azure-maps Understanding Azure Maps Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md
The following table summarizes the Azure Maps services that generate transactions.
| Azure Maps Service | Billable | Transaction Calculation | Meter |
|--|-|-|-|
| [Data v1](/rest/api/maps/data)<br>[Data v2](/rest/api/maps/data-v2)<br>[Data registry](/rest/api/maps/data-registry) | Yes, except for MapDataStorageService.GetDataStatus and MapDataStorageService.GetUserData, which are non-billable | One request = 1 transaction | <ul><li>Location Insights Data (Gen2 pricing)</li></ul> |
-| [Elevation (DEM)](/rest/api/maps/elevation)| Yes| One request = 2 transactions<br> <ul><li>If requesting elevation for a single point then one request = 1 transaction| <ul><li>Location Insights Elevation (Gen2 pricing)</li><li>Standard S1 Elevation Service Transactions (Gen1 S1 pricing)</li></ul>|
+| [Elevation (DEM)](/rest/api/maps/elevation) ([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023)) | Yes | One request = 2 transactions<br><ul><li>If requesting elevation for a single point, then one request = 1 transaction</li></ul> | <ul><li>Location Insights Elevation (Gen2 pricing)</li><li>Standard S1 Elevation Service Transactions (Gen1 S1 pricing)</li></ul> |
| [Geolocation](/rest/api/maps/geolocation) | Yes | One request = 1 transaction | <ul><li>Location Insights Geolocation (Gen2 pricing)</li><li>Standard S1 Geolocation Transactions (Gen1 S1 pricing)</li><li>Standard Geolocation Transactions (Gen1 S0 pricing)</li></ul> |
| [Render v1](/rest/api/maps/render)<br>[Render v2](/rest/api/maps/render-v2) | Yes, except for Terra maps (MapTile.GetTerraTile and layer=terra) which are non-billable. | <ul><li>15 tiles = 1 transaction, except microsoft.dem is one tile = 50 transactions</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul><br>For Creator related usage, see the [Creator table](#azure-maps-creator). | <ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul> |
| [Route](/rest/api/maps/route) | Yes | One request = 1 transaction<br><ul><li>If using the Route Matrix, each cell in the Route Matrix request generates a billable Route transaction.</li><li>If using Batch Directions, each origin/destination coordinate pair in the Batch request call generates a billable Route transaction. Note, the billable Route transaction usage results generated by the batch request will have **-Batch** appended to the API name of your Azure portal metrics report.</li></ul> | <ul><li>Location Insights Routing (Gen2 pricing)</li><li>Standard S1 Routing Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> |
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
description: Overview of the Azure Monitor Agent, which collects monitoring data
Previously updated : 3/24/2023 Last updated : 3/30/2023
In addition to the generally available data collection listed above, Azure Monit
| : | : | : | : |
| [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Public preview | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Auto-deployment of Azure Monitor Agent (Preview)](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md) |
| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors/windows-forwarded-events.md)</li><li>Windows DNS logs: [Public preview](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: [Public preview](../../sentinel/connect-cef-ama.md#set-up-the-common-event-format-cef-via-ama-connector)</li></ul> | Sentinel DNS extension, if you're collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | - |
-| [Change Tracking](../../automation/change-tracking/overview.md) | Public preview | Change Tracking extension | [Change Tracking and Inventory using Azure Monitor Agent](../../automation/change-tracking/overview-monitoring-agent.md) |
+| [Change Tracking and Inventory Management](../../automation/change-tracking/overview.md) | Public preview | Change Tracking extension | [Change Tracking and Inventory using Azure Monitor Agent](../../automation/change-tracking/overview-monitoring-agent.md) |
| [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](../../update-center/index.yml) |
+| [Automation Hybrid Runbook Worker overview](../../automation/automation-hybrid-runbook-worker.md) (available without Azure Monitor Agent) | Migrate to Azure Automation Hybrid Worker Extension - Generally available | None | [Migrate an existing Agent based to Extension based Hybrid Workers](../../automation/extension-based-hybrid-runbook-worker-install.md#migrate-an-existing-agent-based-to-extension-based-hybrid-workers) |
| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Public preview | Azure NetworkWatcher extension | [Monitor network connectivity by using Azure Monitor Agent](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) |
| Azure Stack HCI Insights | private preview | No additional extension installed | [Sign up here](https://aka.ms/amadcr-privatepreviews) |
| Azure Virtual Desktop (AVD) Insights | private preview | No additional extension installed | [Sign up here](https://aka.ms/amadcr-privatepreviews) |
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
[Azure Monitor Agent (AMA)](./agents-overview.md) replaces the Log Analytics agent (also known as MMA and OMS) for both Windows and Linux machines, in both Azure and non-Azure (on-premises and third-party clouds) environments. It introduces a simplified, flexible method of configuring collection configuration called [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md). This article outlines the benefits of migrating to Azure Monitor Agent and provides guidance on how to implement a successful migration. > [!IMPORTANT]
-> The Log Analytics agent will be [retired on **August 31, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you're currently using the Log Analytics agent with Azure Monitor or other supported features and services, you should start planning your migration to Azure Monitor Agent by using the information in this article.
+> The Log Analytics agent will be [retired on **August 31, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you're currently using the Log Analytics agent with Azure Monitor or [other supported features and services](./agents-overview.md#supported-services-and-features), you should start planning your migration to Azure Monitor Agent by using the information in this article and the availability of other solutions/services.
## Benefits
In addition to consolidating and improving upon legacy Log Analytics agents, Azu
2. Deploy extensions and DCR-associations:
    1. **Test first** by deploying extensions<sup>2</sup> and DCR-associations on a few non-production machines. You can also deploy side-by-side on machines running legacy agents (see the section above for agent coexistence).
- 2. Once data starts flowing via Azure Monitor agent, **compare it with legacy agent data** to ensure there are no gaps. You can do this by joining with the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table which indicates 'Azure Monitor Agent' for the new data collection
+ 2. Once data starts flowing via Azure Monitor agent, **compare it with legacy agent data** to ensure there are no gaps. You can do this by joining with the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table which indicates 'Azure Monitor Agent' for the new data collection.
    3. Post testing, you can **roll out broadly**<sup>3</sup> using [built-in policies]() for at-scale deployment of extensions and DCR-associations. **Using policy will also ensure automatic deployment of extensions and DCR-associations for any new machines in future.**
    4. Use the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to **monitor the at-scale migration** across your machines.
-3. **Validate** that Azure Monitor Agent is collecting data as expected and all **downstream dependencies**, such as dashboards, alerts, and workbooks, function properly.
+3. **Validate** that Azure Monitor Agent is collecting data as expected and all **downstream dependencies**, such as dashboards, alerts, and workbooks, function properly. You can do this by joining with/looking at the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table which indicates 'Azure Monitor Agent' vs 'Direct Agent' (for legacy).
4. Clean up: After you confirm that Azure Monitor Agent is collecting data properly, you may **choose to either disable or uninstall the legacy Log Analytics agents**.
    1. If you have migrated to Azure Monitor agent for selected features/solutions and you need to continue using the legacy Log Analytics agent for others, you can
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
Title: Set up the Azure Monitor agent on Windows client devices description: This article describes the instructions to install the agent on Windows 10, 11 client OS devices, configure data collection, manage and troubleshoot the agent. Previously updated : 1/9/2023 Last updated : 3/30/2023
This article provides instructions and guidance for using the client installer for Azure Monitor Agent.
Using the new client installer described here, you can now collect telemetry data from your Windows client devices in addition to servers and virtual machines. Both the [extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) and this installer use Data Collection rules to configure the **same underlying agent**.
+> [!NOTE]
+> This article provides specific guidance for installing the Azure Monitor agent on Windows client devices, subject to the [limitations below](#limitations). For standard installation and management guidance for the agent, see [the agent extension management guidance](./azure-monitor-agent-manage.md).
+ ### Comparison with virtual machine extension Here is a comparison between the client installer and the VM extension for the Azure Monitor agent:
Here is a comparison between client installer and VM extension for Azure Monitor
| Virtual machines, scale sets | No | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) | Installs the agent using the Azure extension framework |
| On-premises servers | No | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (with Azure Arc agent) | Installs the agent using the Azure extension framework, provided for on-premises machines by installing the Arc agent |
+## Limitations
+1. The Windows client installer supports only the latest Windows machines that are **Azure AD joined** or hybrid Azure AD joined. For more information, see the [prerequisites](#prerequisites) below.
+2. The Data Collection rules can only target the Azure AD tenant scope. That is, all DCRs associated with the tenant (via the Monitored Object) will apply to all Windows client machines within that tenant that have the agent installed using this client installer. **Granular targeting using DCRs is not supported** for Windows client devices yet.
+3. No support for Windows machines connected via **Azure private links**
+4. The agent installed using the Windows client installer is designed mainly for Windows desktops or workstations that are **always connected**. While the agent can be installed on laptops via this method, it is not optimized for the battery consumption and network limitations of a laptop.
## Prerequisites 1. The machine must be running Windows client OS version 10 RS4 or higher.
Here is a comparison between client installer and VM extension for Azure Monitor
6. Proceed to create the monitored object that you'll associate data collection rules with, so that the agent can actually start operating. > [!NOTE]
-> The agent installed with the client installer currently doesn't support updating configuration once it is installed. Uninstall and reinstall AMA to update its configuration.
+> The agent installed with the client installer currently doesn't support updating local agent settings once it is installed. Uninstall and reinstall AMA to update these settings.
## Create and associate a 'Monitored Object'
-You need to create a 'Monitored Object' (MO) that creates a representation for the Azure AD tenant within Azure Resource Manager (ARM). This ARM entity is what Data Collection Rules are then associated with.
+You need to create a 'Monitored Object' (MO), which represents the Azure AD tenant within Azure Resource Manager (ARM). Data Collection Rules are then associated with this ARM entity. **The Monitored Object needs to be created only once for any number of machines in a single AAD tenant**.
Currently this association is only **limited** to the Azure AD tenant scope, which means configuration applied to the tenant will be applied to all devices that are part of the tenant and running the agent. The image below demonstrates how this works:
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
To complete this procedure, you need:
- A VM, Virtual Machine Scale Set, or Arc-enabled on-premises server that writes logs to a text file. Text file requirements and best practices (a validation query sketch follows this list):
- - Do store files on the local drive of the machine on which Azure Monitor Agent is running.
+ - Do store files on the local drive of the machine on which Azure Monitor Agent is running and in the directory that is being monitored.
 - Do delineate the end of a record with an end of line.
 - Do use ASCII or UTF-8 encoding. Other formats such as UTF-16 aren't supported.
 - Do create a new log file every day so that you can remove old files easily.
 - Do clean up all log files older than 2 days in the monitored directory. Azure Monitor Agent does not delete old log files, and tracking them uses up Agent resources.
 - Do Not overwrite an existing file with new data. You should only append new data to the file.
- - Do Not rename a file and open a new file with the same name to log to.
- - Do Not rename or copy large log files in to the monitored directory.
+ - Do Not rename a file and open a new file with the same name to log to.
+ - Do Not rename or copy large log files into the monitored directory. If you must, do not exceed 50 MB per minute.
- Do Not rename files in the monitored directory to a new name that is also in the monitored directory. This can cause incorrect ingestion behavior.
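Once collection is configured, a quick way to validate ingestion is to query the destination table. This is a sketch only: `MyTextLogs_CL` is a placeholder for the custom table defined in your DCR.

```kusto
// Sketch: confirm recent text-log lines arrived and spot ingestion gaps.
// Replace MyTextLogs_CL with the custom table your DCR targets.
MyTextLogs_CL
| where TimeGenerated > ago(1h)
| summarize LinesIngested = count() by bin(TimeGenerated, 5m)
| order by TimeGenerated desc
```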
azure-monitor Data Model Complete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-complete.md
Title: Application Insights telemetry data model
-description: This article describes the Application Insights telemetry data model including Request, Dependency, Exception, Trace, Event, Metric, PageView, and Context.
+description: This article describes the Application Insights telemetry data model including request, dependency, exception, trace, event, metric, PageView, and context.
documentationcenter: .net
# Application Insights telemetry data model
-[Application Insights](./app-insights-overview.md) sends telemetry from your web application to the Azure portal so that you can analyze the performance and usage of your application. The telemetry model is standardized, so it's possible to create platform and language-independent monitoring.
+[Application Insights](./app-insights-overview.md) sends telemetry from your web application to the Azure portal so that you can analyze the performance and usage of your application. The telemetry model is standardized, so it's possible to create platform- and language-independent monitoring.
Data collected by Application Insights models this typical application execution pattern. ![Diagram that shows an Application Insights telemetry data model.](./media/data-model-complete/application-insights-data-model.png)
-The following types of telemetry are used to monitor the execution of your app. Three types are automatically collected by the Application Insights SDK from the web application framework:
+The following types of telemetry are used to monitor the execution of your app. The Application Insights SDK from the web application framework automatically collects these three types:
* [Request](#request): Generated to log a request received by your app. For example, the Application Insights web SDK automatically generates a Request telemetry item for each HTTP request that your web app receives.
To report data model or schema problems and suggestions, use our [GitHub reposit
A request telemetry item in [Application Insights](./app-insights-overview.md) represents the logical sequence of execution triggered by an external request to your application. Every request execution is identified by a unique `id` and `url` that contain all the execution parameters.
-You can group requests by logical `name` and define the `source` of this request. Code execution can result in `success` or `fail` and has a certain `duration`. Both success and failure executions can be grouped further by `resultCode`. Start time for the request telemetry is defined on the envelope level.
+You can group requests by logical `name` and define the `source` of this request. Code execution can result in `success` or `fail` and has a certain `duration`. You can further group success and failure executions by using `resultCode`. Start time for the request telemetry is defined on the envelope level.
Request telemetry supports the standard extensibility model by using custom `properties` and `measurements`.
Request telemetry supports the standard extensibility model by using custom `pro
### Name
-The name of the request represents the code path taken to process the request. A low cardinality value allows for better grouping of requests. For HTTP requests, it represents the HTTP method and URL path template like `GET /values/{id}` without the actual `id` value.
+This field is the name of the request and it represents the code path taken to process the request. A low cardinality value allows for better grouping of requests. For HTTP requests, it represents the HTTP method and URL path template like `GET /values/{id}` without the actual `id` value.
The Application Insights web SDK sends a request name "as is" with regard to letter case. Grouping on the UI is case sensitive, so `GET /Home/Index` is counted separately from `GET /home/INDEX` even though often they result in the same controller and action execution. The reason for that is that URLs in general are [case sensitive](https://www.w3.org/TR/WD-html40-970708/htmlweb.html). You might want to see if all `404` errors happened for URLs typed in uppercase. You can read more about request name collection by the ASP.NET web SDK in this [blog post](https://apmtips.com/posts/2015-02-23-request-name-and-url/).
-**Maximum length**: 1,024 characters
+**Maximum length:** 1,024 characters
### ID ID is the identifier of a request call instance. It's used for correlation between the request and other telemetry items. The ID should be globally unique. For more information, see [Telemetry correlation in Application Insights](./correlation.md).
-**Maximum length**: 128 characters
+**Maximum length:** 128 characters
### URL URL is the request URL with all query string parameters.
-**Maximum length**: 2,048 characters
+**Maximum length:** 2,048 characters
### Source Source is the source of the request. Examples are the instrumentation key of the caller or the IP address of the caller. For more information, see [Telemetry correlation in Application Insights](./correlation.md).
-**Maximum length**: 1,024 characters
+**Maximum length:** 1,024 characters
### Duration
The request duration is formatted as `DD.HH:MM:SS.MMMMMM`. It must be positive a
The response code is the result of a request execution. It's the HTTP status code for HTTP requests. It might be an `HRESULT` value or an exception type for other request types.
-**Maximum length**: 1,024 characters
+**Maximum length:** 1,024 characters
### Success
-Success indicates whether a call was successful or unsuccessful. This field is required. When a request isn't set explicitly to `false`, it's considered to be successful. Set this value to `false` if the operation was interrupted by an exception or a returned error result code.
+Success indicates whether a call was successful or unsuccessful. This field is required. When a request isn't set explicitly to `false`, it's considered to be successful. If an exception or returned error result code interrupted the operation, set this value to `false`.
For web applications, Application Insights defines a request as successful when the response code is less than `400` or equal to `401`. However, there are cases when this default mapping doesn't match the semantics of the application. Response code `404` might indicate "no records," which can be part of a regular flow. It also might indicate a broken link. For broken links, you can implement more advanced logic. You can mark broken links as failures only when those links are located on the same site by analyzing the URL referrer. Or you can mark them as failures when they're accessed from the company's mobile application. Similarly, `301` and `302` indicate failure when they're accessed from a client that doesn't support redirects.
-Partially accepted content `206` might indicate a failure of an overall request. For instance, an Application Insights endpoint might receive a batch of telemetry items as a single request. It returns `206` when some items in the batch weren't processed successfully. An increasing rate of `206` indicates a problem that needs to be investigated. Similar logic applies to `207` Multi-Status where the success might be the worst of separate response codes.
+Partially accepted content `206` might indicate a failure of an overall request. For instance, an Application Insights endpoint might receive a batch of telemetry items as a single request. It returns `206` when some items in the batch weren't processed successfully. An increasing rate of `206` indicates a problem that needs to be investigated. Similar logic applies to `207` Multi-Status, where the success might be the worst of separate response codes.
You can read more about the request result code and status code in the [blog post](https://apmtips.com/posts/2016-12-03-request-success-and-response-code/).
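To see how these fields combine in practice, here's a sketch of a query over the classic Application Insights `requests` table (table and column names per the classic schema; the time range is illustrative) that groups by `name` and `resultCode` and surfaces failure counts:

```kusto
// Sketch: request volume, failures, and average duration, grouped by low-cardinality name and result code.
requests
| where timestamp > ago(24h)
| summarize Total = count(),
            Failed = countif(success == false),
            AvgDurationMs = avg(duration)
    by name, resultCode
| order by Failed desc
```

Because `name` is low cardinality and `resultCode` carries the status code, this kind of grouping makes it easy to spot, for example, a spike of `404`s on a single route.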
You can read more about the request result code and status code in the [blog pos
## Dependency
-Dependency Telemetry (in [Application Insights](./app-insights-overview.md)) represents an interaction of the monitored component with a remote component such as SQL or an HTTP endpoint.
+Dependency telemetry (in [Application Insights](./app-insights-overview.md)) represents an interaction of the monitored component with a remote component such as SQL or an HTTP endpoint.
### Name
-Name of the command initiated with this dependency call. Low cardinality value. Examples are stored procedure name and URL path template.
+This field is the name of the command initiated with this dependency call. It has a low cardinality value. Examples are stored procedure name and URL path template.
### ID
-Identifier of a dependency call instance. Used for correlation with the request telemetry item corresponding to this dependency call. For more information, see [correlation](./correlation.md) page.
+ID is the identifier of a dependency call instance. It's used for correlation with the request telemetry item that corresponds to this dependency call. For more information, see [Telemetry correlation in Application Insights](./correlation.md).
### Data
-Command initiated by this dependency call. Examples are SQL statement and HTTP URL with all query parameters.
+This field is the command initiated by this dependency call. Examples are SQL statement and HTTP URL with all query parameters.
### Type
-Dependency type name. Low cardinality value for logical grouping of dependencies and interpretation of other fields like commandName and resultCode. Examples are SQL, Azure table, and HTTP.
+This field is the dependency type name. It has a low cardinality value for logical grouping of dependencies and interpretation of other fields like `commandName` and `resultCode`. Examples are SQL, Azure table, and HTTP.
### Target
-Target site of a dependency call. Examples are server name, host address. For more information, see [correlation](./correlation.md) page.
+This field is the target site of a dependency call. Examples are server name and host address. For more information, see [Telemetry correlation in Application Insights](./correlation.md).
### Duration
-Request duration in format: `DD.HH:MM:SS.MMMMMM`. Must be less than `1000` days.
+The request duration is in the format `DD.HH:MM:SS.MMMMMM`. It must be less than `1000` days.
### Result code
-Result code of a dependency call. Examples are SQL error code and HTTP status code.
+This field is the result code of a dependency call. Examples are SQL error code and HTTP status code.
### Success
-Indication of successful or unsuccessful call.
+This field indicates whether the call was successful or unsuccessful.
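Putting the dependency fields together, here's a sketch of a query over the classic `dependencies` table (time range illustrative) that groups call health by `type`, `target`, and `resultCode`:

```kusto
// Sketch: dependency call volume, failures, and latency by type, target, and result code.
dependencies
| where timestamp > ago(24h)
| summarize Calls = count(),
            Failed = countif(success == false),
            AvgDurationMs = avg(duration)
    by type, target, resultCode
| order by Failed desc
```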
### Custom properties
Indication of successful or unsuccessful call.
## Exception
-In [Application Insights](./app-insights-overview.md), an instance of Exception represents a handled or unhandled exception that occurred during execution of the monitored application.
+In [Application Insights](./app-insights-overview.md), an instance of exception represents a handled or unhandled exception that occurred during execution of the monitored application.
-### Problem Id
+### Problem ID
-Identifier of where the exception was thrown in code. Used for exceptions grouping. Typically a combination of exception type and a function from the call stack.
+The problem ID identifies where the exception was thrown in code. It's used for exceptions grouping. Typically, it's a combination of an exception type and a function from the call stack.
-Max length: 1024 characters
+**Maximum length:** 1,024 characters
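Because the problem ID is the grouping key, a sketch of a query over the classic `exceptions` table that surfaces the most frequent exception groups might look like this:

```kusto
// Sketch: top exception groups over the last day, keyed by problem ID.
exceptions
| where timestamp > ago(1d)
| summarize Occurrences = count() by problemId, type
| top 10 by Occurrences
```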
### Severity level
-Trace severity level. Value can be `Verbose`, `Information`, `Warning`, `Error`, `Critical`.
+This field is the trace severity level. The value can be `Verbose`, `Information`, `Warning`, `Error`, or `Critical`.
### Exception details
Trace telemetry in [Application Insights](./app-insights-overview.md) represents
Trace message.
-**Maximum length**: 32,768 characters
+**Maximum length:** 32,768 characters
### Severity level Trace severity level.
-**Values**: `Verbose`, `Information`, `Warning`, `Error`, and `Critical`
+**Values:** `Verbose`, `Information`, `Warning`, `Error`, and `Critical`
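In the stored `traces` table (classic schema), these levels surface as the numeric `severityLevel` column, conventionally 0 (`Verbose`) through 4 (`Critical`). A sketch of a query filtering to warnings and above:

```kusto
// Sketch: recent traces at Warning severity (2) or higher.
traces
| where timestamp > ago(1h)
| where severityLevel >= 2
| project timestamp, severityLevel, message
| order by timestamp desc
```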
### Custom properties
Trace severity level.
## Event
-You can create event telemetry items (in [Application Insights](./app-insights-overview.md)) to represent an event that occurred in your application. Typically, it's a user interaction such as a button click or order checkout. It can also be an application lifecycle event like initialization or a configuration update.
+You can create event telemetry items (in [Application Insights](./app-insights-overview.md)) to represent an event that occurred in your application. Typically, it's a user interaction such as a button click or an order checkout. It can also be an application lifecycle event like initialization or a configuration update.
-Semantically, events may or may not be correlated to requests. However, if used properly, event telemetry is more important than requests or traces. Events represent business telemetry and should be subject to separate, less aggressive [sampling](./api-filtering-sampling.md).
+Semantically, events might or might not be correlated to requests. If used properly, event telemetry is more important than requests or traces. Events represent business telemetry and should be subject to separate, less aggressive [sampling](./api-filtering-sampling.md).
### Name
-Event name: To allow proper grouping and useful metrics, restrict your application so that it generates a few separate event names. For example, don't use a separate name for each generated instance of an event.
+**Event name:** To allow proper grouping and useful metrics, restrict your application so that it generates a few separate event names. For example, don't use a separate name for each generated instance of an event.
**Maximum length:** 512 characters
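One way to check that event names stay low cardinality is to count instances per name; here's a sketch over the classic `customEvents` table:

```kusto
// Sketch: instances per event name; a long tail of near-unique names signals a naming problem.
customEvents
| where timestamp > ago(7d)
| summarize Instances = count() by name
| order by Instances desc
```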
Event name: To allow proper grouping and useful metrics, restrict your applicati
## Metric
-There are two types of metric telemetry supported by [Application Insights](./app-insights-overview.md): single measurement and pre-aggregated metric. Single measurement is just a name and value. Pre-aggregated metric specifies minimum and maximum value of the metric in the aggregation interval and standard deviation of it.
+[Application Insights](./app-insights-overview.md) supports two types of metric telemetry: single measurement and preaggregated metric. Single measurement is just a name and value. Preaggregated metric specifies the minimum and maximum value of the metric in the aggregation interval and the standard deviation of it.
-Pre-aggregated metric telemetry assumes that aggregation period was one minute.
+Preaggregated metric telemetry assumes that the aggregation period was one minute.
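For illustration, in the classic `customMetrics` table the preaggregated shape surfaces as dedicated columns. This sketch assumes the classic schema, where the min/max/standard-deviation columns are populated only for preaggregated items:

```kusto
// Sketch: inspect raw vs. preaggregated metric fields.
// valueMin, valueMax, and valueStdDev are only meaningful for preaggregated items.
customMetrics
| where timestamp > ago(1h)
| project timestamp, name, value, valueCount, valueMin, valueMax, valueStdDev
```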
-There are several well-known metric names supported by Application Insights. These metrics placed into performanceCounters table.
+Application Insights supports several well-known metric names. These metrics are placed into the `performanceCounters` table.
-Metric representing system and process counters:
+The following table shows the metrics that represent system and process counters.
-| **.NET name** | **Platform agnostic name** | **Description**
-| - | -- | -
-| `\Processor(_Total)\% Processor Time` | Work in progress... | total machine CPU
-| `\Memory\Available Bytes` | Work in progress... | Shows the amount of physical memory, in bytes, available to processes running on the computer. It is calculated by summing the amount of space on the zeroed, free, and standby memory lists. Free memory is ready for use; zeroed memory consists of pages of memory filled with zeros to prevent later processes from seeing data used by a previous process; standby memory is memory that has been removed from a process's working set (its physical memory) en route to disk but is still available to be recalled. See [Memory Object](/previous-versions/ms804008(v=msdn.10))
-| `\Process(??APP_WIN32_PROC??)\% Processor Time` | Work in progress... | CPU of the process hosting the application
-| `\Process(??APP_WIN32_PROC??)\Private Bytes` | Work in progress... | memory used by the process hosting the application
-| `\Process(??APP_WIN32_PROC??)\IO Data Bytes/sec` | Work in progress... | rate of I/O operations runs by process hosting the application
-| `\ASP.NET Applications(??APP_W3SVC_PROC??)\Requests/Sec` | Work in progress... | rate of requests processed by application
-| `\.NET CLR Exceptions(??APP_CLR_PROC??)\# of Exceps Thrown / sec` | Work in progress... | rate of exceptions thrown by application
-| `\ASP.NET Applications(??APP_W3SVC_PROC??)\Request Execution Time` | Work in progress... | average requests execution time
-| `\ASP.NET Applications(??APP_W3SVC_PROC??)\Requests In Application Queue` | Work in progress... | number of requests waiting for the processing in a queue
+| .NET name | Platform-agnostic name | Description
+| - | -- | -
+| `\Processor(_Total)\% Processor Time` | Work in progress... | Total machine CPU.
+| `\Memory\Available Bytes` | Work in progress... | Shows the amount of physical memory, in bytes, available to processes running on the computer. It's calculated by summing the amount of space on the zeroed, free, and standby memory lists. Free memory is ready for use. Zeroed memory consists of pages of memory filled with zeros to prevent later processes from seeing data used by a previous process. Standby memory is memory that's been removed from a process's working set (its physical memory) en route to disk but is still available to be recalled. See [Memory Object](/previous-versions/ms804008(v=msdn.10)).
+| `\Process(??APP_WIN32_PROC??)\% Processor Time` | Work in progress... | CPU of the process hosting the application.
+| `\Process(??APP_WIN32_PROC??)\Private Bytes` | Work in progress... | Memory used by the process hosting the application.
+| `\Process(??APP_WIN32_PROC??)\IO Data Bytes/sec` | Work in progress... | Rate of I/O operations run by the process hosting the application.
+| `\ASP.NET Applications(??APP_W3SVC_PROC??)\Requests/Sec` | Work in progress... | Rate of requests processed by an application.
+| `\.NET CLR Exceptions(??APP_CLR_PROC??)\# of Exceps Thrown / sec` | Work in progress... | Rate of exceptions thrown by an application.
+| `\ASP.NET Applications(??APP_W3SVC_PROC??)\Request Execution Time` | Work in progress... | Average request execution time.
+| `\ASP.NET Applications(??APP_W3SVC_PROC??)\Requests In Application Queue` | Work in progress... | Number of requests waiting for processing in a queue.
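As a usage sketch, these well-known counters can be trended from the classic `performanceCounters` table (column and counter names per the classic schema; time range illustrative):

```kusto
// Sketch: trend total machine CPU in 5-minute bins.
performanceCounters
| where timestamp > ago(1h)
| where counter == "% Processor Time" and instance == "_Total"
| summarize AvgCpu = avg(value) by bin(timestamp, 5m)
| order by timestamp asc
```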
-See [Metrics - Get](/rest/api/application-insights/metrics/get) for mor